{"id":10128,"date":"2017-05-25T12:34:12","date_gmt":"2017-05-25T10:34:12","guid":{"rendered":"https:\/\/www.dbi-services.com\/blog\/history-of-upgrading-9-tb-postgresql-database\/"},"modified":"2023-06-08T16:34:26","modified_gmt":"2023-06-08T14:34:26","slug":"history-of-upgrading-9-tb-postgresql-database","status":"publish","type":"post","link":"https:\/\/www.dbi-services.com\/blog\/history-of-upgrading-9-tb-postgresql-database\/","title":{"rendered":"History of Upgrading 9 Tb PostgreSQL database"},"content":{"rendered":"<p><strong>Mouhamadou Diaw<\/strong><\/p>\n<p>In this blog post I am going to share the story of a PostgreSQL migration and upgrade from 9.2 to 9.6. Let me first explain the context.<br \/>\nWe have a PostgreSQL environment with the following characteristics. Note that the real database and server names have been changed for security reasons.<br \/>\n<b>Host<\/b>: CentOS release 6.4<br \/>\n<b>PostgreSQL version<\/b>: 9.2<br \/>\n<b>Database size<\/b>: 9 TB<\/p>\n<p><code><br \/>\npostgres=# select version();<br \/>\nversion<br \/>\n---------------------------------------------------------------------------------------------------<br \/>\n------------<br \/>\nPostgreSQL 9.2.4 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-<br \/>\n52), 64-bit<br \/>\n(1 row)<br \/>\npostgres=#<br \/>\n<\/code><\/p>\n<p><code><br \/>\npostgres=# select pg_size_pretty(pg_database_size('zoulou'));<br \/>\npg_size_pretty<br \/>\n----------------<br \/>\n9937 GB<br \/>\n(1 row)<br \/>\n<\/code><\/p>\n<p>The problem was that we were close to the space limit on the current server, so the plan was to move the database to a new server and then upgrade it to version 9.6.<\/p>\n<p>The questions we had to answer were the following:<\/p>\n<p>Do we keep the same Linux version?<br \/>\nHow do we transfer data to the new server? 
pg_dump, pg_basebackup, using cp to copy the data files...<br \/>\nWhat is the fastest way to upgrade 9 TB of data?<\/p>\n<p>Finally we decided:<\/p>\n<p>To use Debian instead of CentOS, simply because the sysadmin prefers Debian.<br \/>\nThe two environments can be summarized by the following picture.<\/p>\n<p><a href=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/schema.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-medium wp-image-16745\" src=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/schema.png\" alt=\"schema\" width=\"300\" height=\"168\" \/><\/a><\/p>\n<p>To transfer the data to the new server server2 we decided to use <span style=\"color: #ea6d14\">rsync<\/span>, because an export and import would take too long. So while the database was still running we launched the following rsync command on server1.<br \/>\n<code>rsync -av -P --bwlimit=30720 \/opt\/PostgreSQL\/9.2\/data\/* server2:\/u02\/pgdata\/zoulou\/<\/code><\/p>\n<p>The bandwidth was limited because users were complaining about the network. The rsync with the database running took 3 days to finish, yes 3 days. A few days later we requested a downtime window to stop the cluster and rsync the delta: indeed, as you may know, a coherent copy is required to be able to start the cluster. Note that the copy of the delta took 2 hours.<\/p>\n<p>Now it is time to install the two versions of PostgreSQL (9.2.4 and 9.6.2) on the new server. We will show here just the main steps.<\/p>\n<p>With apt-get we install the required packages<br \/>\n<code><br \/>\napt-get install libldap2-dev libpython-dev libreadline-dev libssl-dev bison flex libghc-zlib-dev libcrypto++-dev libxml2-dev libxslt1-dev tcl tclcl-dev bzip2 wget screen ksh libpam0g-dev libperl-dev make unzip libpam0g-dev tcl-dev python<br \/>\n<\/code><\/p>\n<p>One important point is to use the same build options as on the source for the PostgreSQL installation. 
We used the pg_config command on the source server to retrieve these options. The installation of 9.2 is described below (the 9.6.2 install is the same)<br \/>\n<code><br \/>\nPGHOME=\/u01\/app\/postgres\/product\/92\/db_4<br \/>\nSEGSIZE=1<br \/>\nBLOCKSIZE=8<br \/>\nWALSEGSIZE=16<br \/>\n<\/code><br \/>\n<code><br \/>\n.\/configure --prefix=${PGHOME} \\<br \/>\n--exec-prefix=${PGHOME} \\<br \/>\n--bindir=${PGHOME}\/bin \\<br \/>\n--libdir=${PGHOME}\/lib \\<br \/>\n--sysconfdir=${PGHOME}\/etc \\<br \/>\n--includedir=${PGHOME}\/include \\<br \/>\n--datarootdir=${PGHOME}\/share \\<br \/>\n--datadir=${PGHOME}\/share \\<br \/>\n--with-pgport=5432 \\<br \/>\n--with-perl \\<br \/>\n--with-python \\<br \/>\n--with-tcl \\<br \/>\n--with-openssl \\<br \/>\n--with-pam \\<br \/>\n--with-ldap \\<br \/>\n--with-libxml \\<br \/>\n--with-libxslt \\<br \/>\n--with-segsize=${SEGSIZE} \\<br \/>\n--with-blocksize=${BLOCKSIZE} \\<br \/>\n--with-wal-segsize=${WALSEGSIZE}<br \/>\n<\/code><br \/>\n<code><br \/>\nmake world<br \/>\nmake install<br \/>\ncd contrib\/<br \/>\nmake install<br \/>\ncd ..\/doc\/<br \/>\nmake install<br \/>\n<\/code><\/p>\n<p>Now that the data is copied and the PostgreSQL software is installed, we can start the 9.2 cluster on server2.<\/p>\n<p><code>\/u01\/app\/postgres\/product\/92\/bin\/pg_ctl -D \/u02\/pgdata\/zoulou\/ start<\/code><\/p>\n<p>Note that the first attempt failed because of the locale settings: on the source the system is using en_US.UTF-8 while the new server is using fr_CH.UTF-8. So we changed the locale settings using this command<br \/>\n<code># dpkg-reconfigure locales<\/code><\/p>\n<p>Once the 9.2 cluster started without errors we could think about the upgrade. With a database size of 9 TB to upgrade, we decided to use the --link option. Indeed this option uses hard links instead of copying the files to the new cluster, which definitely speeds up the upgrade process. 
The tradeoff of this method is that if the upgrade fails after the new cluster has been started, you cannot restart the old cluster on the same server.<\/p>\n<p>The first step of the upgrade is to initialize a new 9.6.2 cluster on the new server server2<br \/>\n<code><br \/>\n\/u01\/app\/postgres\/product\/96\/db_2\/bin\/initdb --pgdata=\/u02\/pgdata\/zoulou962\/ --xlogdir=\/u03\/ZOULOU962\/pg_xlog\/ --pwprompt --auth=md5<br \/>\nThe files belonging to this database system will be owned by user \"postgres\".<br \/>\nThis user must also own the server process.<br \/>\nThe database cluster will be initialized with locale \"en_US.UTF-8\".<br \/>\nThe default database encoding has accordingly been set to \"UTF8\".<br \/>\nThe default text search configuration will be set to \"english\".<br \/>\nData page checksums are disabled.<br \/>\nEnter new superuser password:<br \/>\nEnter it again:<br \/>\nfixing permissions on existing directory \/u02\/pgdata\/zoulou962 ... ok<br \/>\nfixing permissions on existing directory \/u03\/zoulou962\/pg_xlog ... ok<br \/>\ncreating subdirectories ... ok<br \/>\nselecting default max_connections ... 100<br \/>\nselecting default shared_buffers ... 128MB<br \/>\nselecting dynamic shared memory implementation ... posix<br \/>\ncreating configuration files ... ok<br \/>\nrunning bootstrap script ... ok<br \/>\nperforming post-bootstrap initialization ... ok<br \/>\nsyncing data to disk ... ok<br \/>\nSuccess. 
You can now start the database server using:<br \/>\n\/u01\/app\/postgres\/product\/96\/db_2\/bin\/pg_ctl -D \/u02\/pgdata\/zoulou962\/ -l logfile start<br \/>\n<\/code><\/p>\n<p>From the 9.6 data directory, we copy the configuration files of the 9.2 cluster to the 9.6 cluster<\/p>\n<p><code><br \/>\nmv postgresql.conf postgresql.conf_origin<br \/>\nmv pg_hba.conf pg_hba.conf_origin<br \/>\ncp ..\/zoulou\/postgresql.conf .<br \/>\ncp ..\/zoulou\/pg_hba.conf .<br \/>\n<\/code><\/p>\n<p>Then we stop the 9.2 cluster on the new server<br \/>\n<code><br \/>\nwhich pg_ctl<br \/>\n\/u01\/app\/postgres\/product\/92\/db_4\/bin\/pg_ctl<br \/>\npostgres@apsicpap01:~$ pg_ctl stop<br \/>\nwaiting for server to shut down.... done<br \/>\nserver stopped<br \/>\n<\/code><\/p>\n<p>Once the 9.2 cluster is stopped we run the pg_upgrade command with the -c option.<br \/>\nThe -c option checks the clusters only and does not change any data.<\/p>\n<p><code><br \/>\n\/u01\/app\/postgres\/product\/96\/db_2\/bin\/pg_upgrade -d \/u02\/pgdata\/zoulou\/ -D \/u02\/pgdata\/zoulou962\/ -b \/u01\/app\/postgres\/product\/92\/db_4\/bin\/ -B \/u01\/app\/postgres\/product\/96\/db_2\/bin\/ -c<br \/>\nPerforming Consistency Checks<br \/>\n-----------------------------<br \/>\nChecking cluster versions                                   ok<br \/>\nChecking database user is the install user                  ok<br \/>\nChecking database connection settings                       ok<br \/>\nChecking for prepared transactions                          ok<br \/>\nChecking for reg* system OID user data types                ok<br \/>\nChecking for contrib\/isn with bigint-passing mismatch       ok<br \/>\nChecking for roles starting with 'pg_'                      ok<br \/>\nChecking for invalid \"line\" user columns                    ok<br \/>\nChecking for presence of required libraries                 ok<br \/>\nChecking database user is the install user                  ok<br \/>\nChecking for prepared transactions                          ok<br \/>\n*Clusters are compatible*<br \/>\n<\/code><\/p>\n<p>And now we are ready to upgrade with the --link option. I was surprised how fast the upgrade was: yes, we upgraded 9 TB of database in less than 3 minutes. Incredible, this --link option.<br \/>\n<code><br \/>\n\/u01\/app\/postgres\/product\/96\/db_2\/bin\/pg_upgrade -d \/u02\/pgdata\/zoulou\/ -D \/u02\/pgdata\/zoulou962\/ -b \/u01\/app\/postgres\/product\/92\/db_4\/bin\/ -B \/u01\/app\/postgres\/product\/96\/db_2\/bin\/ --link<br \/>\nPerforming Consistency Checks<br \/>\n-----------------------------<br \/>\nChecking cluster versions                                   ok<br \/>\nChecking database user is the install user                  ok<br \/>\nChecking database connection settings                       ok<br \/>\nChecking for prepared transactions                          ok<br \/>\nChecking for reg* system OID user data types                ok<br \/>\nChecking for contrib\/isn with bigint-passing mismatch       ok<br \/>\nChecking for roles starting with 'pg_'                      ok<br \/>\nChecking for invalid \"line\" user columns                    ok<br \/>\nCreating dump of global objects                             ok<br \/>\nCreating dump of database schemas<br \/>\n..                                                         
ok<br \/>\nChecking for presence of required libraries                 ok<br \/>\nChecking database user is the install user                  ok<br \/>\nChecking for prepared transactions                          ok<br \/>\n..<br \/>\nIf pg_upgrade fails after this point, you must re-initdb the<br \/>\nnew cluster before continuing.<br \/>\n..<br \/>\nPerforming Upgrade<br \/>\n------------------<br \/>\nAnalyzing all rows in the new cluster                       ok<br \/>\nFreezing all rows on the new cluster                        ok<br \/>\nDeleting files from new pg_clog                             ok<br \/>\nCopying old pg_clog to new server                           ok<br \/>\nSetting next transaction ID and epoch for new cluster       ok<br \/>\nDeleting files from new pg_multixact\/offsets                ok<br \/>\nSetting oldest multixact ID on new cluster                  ok<br \/>\nResetting WAL archives                                      ok<br \/>\nSetting frozenxid and minmxid counters in new cluster       ok<br \/>\nRestoring global objects in the new cluster                 ok<br \/>\nRestoring database schemas in the new cluster<br \/>\n..                                                          ok<br \/>\nSetting minmxid counter in new cluster                      ok<br \/>\nAdding \".old\" suffix to old global\/pg_control               ok<br \/>\n..<br \/>\nIf you want to start the old cluster, you will need to remove<br \/>\nthe \".old\" suffix from \/u02\/pgdata\/zoulou\/global\/pg_control.old.<br \/>\nBecause \"link\" mode was used, the old cluster cannot be safely<br \/>\nstarted once the new cluster has been started.<br \/>\n..<br \/>\nLinking user relation files<br \/>\n..                                                          
ok<br \/>\nSetting next OID for new cluster                            ok<br \/>\nSync data directory to disk                                 ok<br \/>\nCreating script to analyze new cluster                      ok<br \/>\nCreating script to delete old cluster                       ok<br \/>\n..<br \/>\nUpgrade Complete<br \/>\n----------------<br \/>\nOptimizer statistics are not transferred by pg_upgrade so,<br \/>\nonce you start the new server, consider running:<br \/>\n.\/analyze_new_cluster.sh<br \/>\n..<br \/>\nRunning this script will delete the old cluster's data files:<br \/>\n.\/delete_old_cluster.sh<br \/>\n<\/code><br \/>\nWe delete the old cluster<br \/>\n<code>.\/delete_old_cluster.sh<\/code><br \/>\nAnd we start the new 9.6 cluster<br \/>\n<code><br \/>\npg_ctl start<br \/>\n2017-05-11 09:11:42 CEST LOG:  redirecting log output to logging collector process<br \/>\n2017-05-11 09:11:42 CEST HINT:  Future log output will appear in directory \"\/u99\/zoulou962\/pg_log\".<br \/>\n<\/code><br \/>\nWe generate statistics<br \/>\n<code><br \/>\n.\/analyze_new_cluster.sh<br \/>\nThis script will generate minimal optimizer statistics rapidly<br \/>\nso your system is usable, and then gather statistics twice more<br \/>\nwith increasing accuracy.  
When it is done, your system will<br \/>\nhave the default level of optimizer statistics.<br \/>\nIf you have used ALTER TABLE to modify the statistics target for<br \/>\nany tables, you might want to remove them and restore them after<br \/>\nrunning this script because they will delay fast statistics generation.<br \/>\nIf you would like default statistics as quickly as possible, cancel<br \/>\nthis script and run:<br \/>\n\"\/u01\/app\/postgres\/product\/96\/db_2\/bin\/vacuumdb\" --all --analyze-only<br \/>\nvacuumdb: processing database \"zoulou\": Generating minimal optimizer statistics (1 target)<br \/>\nvacuumdb: processing database \"postgres\": Generating minimal optimizer statistics (1 target)<br \/>\nvacuumdb: processing database \"template1\": Generating minimal optimizer statistics (1 target)<br \/>\nvacuumdb: processing database \"zoulou\": Generating medium optimizer statistics (10 targets)<br \/>\nvacuumdb: processing database \"postgres\": Generating medium optimizer statistics (10 targets)<br \/>\nvacuumdb: processing database \"template1\": Generating medium optimizer statistics (10 targets)<br \/>\nvacuumdb: processing database \"zoulou\": Generating default (full) optimizer statistics<br \/>\nvacuumdb: processing database \"postgres\": Generating default (full) optimizer statistics<br \/>\nvacuumdb: processing database \"template1\": Generating default (full) optimizer statistics<br \/>\nDone<br \/>\n<\/code><\/p>\n<p>And the last step is to verify the database version and size<br \/>\n<code><br \/>\nzoulou=# select version();<br \/>\nversion<br \/>\n------------------------------------------------------------------------------------------<br \/>\nPostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled by gcc (Debian 4.9.2-10) 4.9.2, 64-bit<br \/>\n(1 row)<\/code><\/p>\n<p><code>zoulou=# select pg_size_pretty(pg_database_size('zoulou'));<br \/>\npg_size_pretty<br \/>\n----------------<br \/>\n9937 GB<br \/>\n(1 row)<br 
\/>\nzoulou=#<br \/>\n<\/code><\/p>\n<p>This is the end of the story. I hope it can help with your future migrations and upgrades.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Mouhamadou Diaw In this blog I am going share a history of PostgreSQL migration and upgrade from 9.2 to 9.6. Let me first explain the context We have a PostgreSQL environment with following characteristics. Note that real database name, server name are changed for security reason Host: CentOS release 6.4 PostgreSQL version: 9.2 Database size [&hellip;]<\/p>\n","protected":false},"author":27,"featured_media":10129,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[229],"tags":[77,1109],"type_dbi":[],"class_list":["post-10128","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-database-administration-monitoring","tag-postgresql","tag-postgresql-upgrade"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.5) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>History of Upgrading 9 Tb PostgreSQL database - dbi Blog<\/title>\n<meta name=\"description\" content=\"PostgreSQL Upgrade, PostgreSQL\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.dbi-services.com\/blog\/history-of-upgrading-9-tb-postgresql-database\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"History of Upgrading 9 Tb PostgreSQL database\" \/>\n<meta property=\"og:description\" content=\"PostgreSQL Upgrade, PostgreSQL\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.dbi-services.com\/blog\/history-of-upgrading-9-tb-postgresql-database\/\" \/>\n<meta property=\"og:site_name\" 
content=\"dbi Blog\" \/>\n<meta property=\"article:published_time\" content=\"2017-05-25T10:34:12+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-06-08T14:34:26+00:00\" \/>\n<meta property=\"og:image\" content=\"http:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/schema.png\" \/>\n\t<meta property=\"og:image:width\" content=\"639\" \/>\n\t<meta property=\"og:image:height\" content=\"358\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Oracle Team\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Oracle Team\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/history-of-upgrading-9-tb-postgresql-database\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/history-of-upgrading-9-tb-postgresql-database\\\/\"},\"author\":{\"name\":\"Oracle Team\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/#\\\/schema\\\/person\\\/66ab87129f2d357f09971bc7936a77ee\"},\"headline\":\"History of Upgrading 9 Tb PostgreSQL 
database\",\"datePublished\":\"2017-05-25T10:34:12+00:00\",\"dateModified\":\"2023-06-08T14:34:26+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/history-of-upgrading-9-tb-postgresql-database\\\/\"},\"wordCount\":593,\"commentCount\":0,\"image\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/history-of-upgrading-9-tb-postgresql-database\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2022\\\/04\\\/schema.png\",\"keywords\":[\"PostgreSQL\",\"PostgreSQL Upgrade\"],\"articleSection\":[\"Database Administration &amp; Monitoring\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/history-of-upgrading-9-tb-postgresql-database\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/history-of-upgrading-9-tb-postgresql-database\\\/\",\"url\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/history-of-upgrading-9-tb-postgresql-database\\\/\",\"name\":\"History of Upgrading 9 Tb PostgreSQL database - dbi Blog\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/history-of-upgrading-9-tb-postgresql-database\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/history-of-upgrading-9-tb-postgresql-database\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2022\\\/04\\\/schema.png\",\"datePublished\":\"2017-05-25T10:34:12+00:00\",\"dateModified\":\"2023-06-08T14:34:26+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/#\\\/schema\\\/person\\\/66ab87129f2d357f09971bc7936a77ee\"},\"description\":\"PostgreSQL Upgrade, 
PostgreSQL\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/history-of-upgrading-9-tb-postgresql-database\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/history-of-upgrading-9-tb-postgresql-database\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/history-of-upgrading-9-tb-postgresql-database\\\/#primaryimage\",\"url\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2022\\\/04\\\/schema.png\",\"contentUrl\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2022\\\/04\\\/schema.png\",\"width\":639,\"height\":358},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/history-of-upgrading-9-tb-postgresql-database\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Accueil\",\"item\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"History of Upgrading 9 Tb PostgreSQL database\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/#website\",\"url\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/\",\"name\":\"dbi Blog\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/#\\\/schema\\\/person\\\/66ab87129f2d357f09971bc7936a77ee\",\"name\":\"Oracle 
Team\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/f711f7cd2c9b09bf2627133755b569fb5be0694810cfd33033bdd095fedba86d?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/f711f7cd2c9b09bf2627133755b569fb5be0694810cfd33033bdd095fedba86d?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/f711f7cd2c9b09bf2627133755b569fb5be0694810cfd33033bdd095fedba86d?s=96&d=mm&r=g\",\"caption\":\"Oracle Team\"},\"url\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/author\\\/oracle-team\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"History of Upgrading 9 Tb PostgreSQL database - dbi Blog","description":"PostgreSQL Upgrade, PostgreSQL","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.dbi-services.com\/blog\/history-of-upgrading-9-tb-postgresql-database\/","og_locale":"en_US","og_type":"article","og_title":"History of Upgrading 9 Tb PostgreSQL database","og_description":"PostgreSQL Upgrade, PostgreSQL","og_url":"https:\/\/www.dbi-services.com\/blog\/history-of-upgrading-9-tb-postgresql-database\/","og_site_name":"dbi Blog","article_published_time":"2017-05-25T10:34:12+00:00","article_modified_time":"2023-06-08T14:34:26+00:00","og_image":[{"width":639,"height":358,"url":"http:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/schema.png","type":"image\/png"}],"author":"Oracle Team","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Oracle Team","Est. 
reading time":"7 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.dbi-services.com\/blog\/history-of-upgrading-9-tb-postgresql-database\/#article","isPartOf":{"@id":"https:\/\/www.dbi-services.com\/blog\/history-of-upgrading-9-tb-postgresql-database\/"},"author":{"name":"Oracle Team","@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/66ab87129f2d357f09971bc7936a77ee"},"headline":"History of Upgrading 9 Tb PostgreSQL database","datePublished":"2017-05-25T10:34:12+00:00","dateModified":"2023-06-08T14:34:26+00:00","mainEntityOfPage":{"@id":"https:\/\/www.dbi-services.com\/blog\/history-of-upgrading-9-tb-postgresql-database\/"},"wordCount":593,"commentCount":0,"image":{"@id":"https:\/\/www.dbi-services.com\/blog\/history-of-upgrading-9-tb-postgresql-database\/#primaryimage"},"thumbnailUrl":"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/schema.png","keywords":["PostgreSQL","PostgreSQL Upgrade"],"articleSection":["Database Administration &amp; Monitoring"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.dbi-services.com\/blog\/history-of-upgrading-9-tb-postgresql-database\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.dbi-services.com\/blog\/history-of-upgrading-9-tb-postgresql-database\/","url":"https:\/\/www.dbi-services.com\/blog\/history-of-upgrading-9-tb-postgresql-database\/","name":"History of Upgrading 9 Tb PostgreSQL database - dbi 
Blog","isPartOf":{"@id":"https:\/\/www.dbi-services.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.dbi-services.com\/blog\/history-of-upgrading-9-tb-postgresql-database\/#primaryimage"},"image":{"@id":"https:\/\/www.dbi-services.com\/blog\/history-of-upgrading-9-tb-postgresql-database\/#primaryimage"},"thumbnailUrl":"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/schema.png","datePublished":"2017-05-25T10:34:12+00:00","dateModified":"2023-06-08T14:34:26+00:00","author":{"@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/66ab87129f2d357f09971bc7936a77ee"},"description":"PostgreSQL Upgrade, PostgreSQL","breadcrumb":{"@id":"https:\/\/www.dbi-services.com\/blog\/history-of-upgrading-9-tb-postgresql-database\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.dbi-services.com\/blog\/history-of-upgrading-9-tb-postgresql-database\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.dbi-services.com\/blog\/history-of-upgrading-9-tb-postgresql-database\/#primaryimage","url":"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/schema.png","contentUrl":"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/schema.png","width":639,"height":358},{"@type":"BreadcrumbList","@id":"https:\/\/www.dbi-services.com\/blog\/history-of-upgrading-9-tb-postgresql-database\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Accueil","item":"https:\/\/www.dbi-services.com\/blog\/"},{"@type":"ListItem","position":2,"name":"History of Upgrading 9 Tb PostgreSQL database"}]},{"@type":"WebSite","@id":"https:\/\/www.dbi-services.com\/blog\/#website","url":"https:\/\/www.dbi-services.com\/blog\/","name":"dbi 
Blog","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.dbi-services.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/66ab87129f2d357f09971bc7936a77ee","name":"Oracle Team","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/f711f7cd2c9b09bf2627133755b569fb5be0694810cfd33033bdd095fedba86d?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/f711f7cd2c9b09bf2627133755b569fb5be0694810cfd33033bdd095fedba86d?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/f711f7cd2c9b09bf2627133755b569fb5be0694810cfd33033bdd095fedba86d?s=96&d=mm&r=g","caption":"Oracle Team"},"url":"https:\/\/www.dbi-services.com\/blog\/author\/oracle-team\/"}]}},"_links":{"self":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/10128","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/users\/27"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/comments?post=10128"}],"version-history":[{"count":1,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/10128\/revisions"}],"predecessor-version":[{"id":25698,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/10128\/revisions\/25698"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/media\/10129"}],"wp:attachment":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/media?parent=10128"}],"wp:term":[{"taxonomy":
"category","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/categories?post=10128"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/tags?post=10128"},{"taxonomy":"type","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/type_dbi?post=10128"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}