{"id":13467,"date":"2020-02-21T13:39:17","date_gmt":"2020-02-21T12:39:17","guid":{"rendered":"https:\/\/www.dbi-services.com\/blog\/speed-up-datapump-export-for-migrating-big-databases\/"},"modified":"2020-02-21T13:39:17","modified_gmt":"2020-02-21T12:39:17","slug":"speed-up-datapump-export-for-migrating-big-databases","status":"publish","type":"post","link":"https:\/\/www.dbi-services.com\/blog\/speed-up-datapump-export-for-migrating-big-databases\/","title":{"rendered":"Speed up datapump export for migrating big databases"},"content":{"rendered":"<h2>Introduction<\/h2>\n<p>Big Oracle databases (several TB) are still tough to migrate to another version on a new server. For most of them, you&#8217;ll probably use RMAN restore or Data Guard, but datapump is always a cleaner way to migrate. With datapump, you can easily migrate to a new filesystem (ASM for example), rethink your tablespace organization, reorganize all the segments, exclude unneeded components, etc. All of these tasks in one operation. But a datapump export can take hours and hours to complete. This blog post describes a method I used on several projects: it helped me a lot to optimize migration time.<\/p>\n<h2>Why does datapump export take so much time?<\/h2>\n<p>First of all, exporting data with datapump means actually extracting all the objects from the database, so it&#8217;s easy to understand why it&#8217;s much slower than copying datafiles. Datapump speed mainly depends on the speed of the disks where the datafiles reside, and on the parallelism level. Increasing parallelism does not always speed up the export, simply because on mechanical disks it&#8217;s slower to read multiple objects from the same disks than to read them serially. So there is some kind of limit, and for big databases, the export can last for hours. Another problem is that a long-lasting export needs more undo data. 
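<\/p>\n<p>For example, you can check and raise undo_retention before a long consistent export (36000 seconds, i.e. 10 hours, is just an example value &#8211; adjust it to your own expected export duration):<\/p>\n<p><code>sqlplus \/ as sysdba<br \/>\nshow parameter undo_retention<br \/>\n-- example value: 10 hours = 36000 seconds<br \/>\nalter system set undo_retention=36000 scope=both;<br \/>\n<\/code><\/p>\n<p>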
If your datapump export lasts 10 hours, you&#8217;ll need 10 hours of undo_retention (if you need a consistent dump &#8211; at least when testing the migration, because the application is still running). You&#8217;re also risking DDL changes on the database, and undo_retention cannot do anything about that. Be careful: an incomplete dump is still usable for importing data, but you&#8217;ll miss several objects &#8211; not the goal, I presume.<\/p>\n<p>The solution is to reduce the time needed for the datapump export and thus avoid such problems.<\/p>\n<h2>SSD is the solution<\/h2>\n<p>SSD is probably the best choice for today&#8217;s databases. No more I\/O bottleneck, that&#8217;s all we were waiting for. But your source database, an old 11gR2 or 12cR1, probably doesn&#8217;t run on SSD, especially if it&#8217;s a big database. SSDs were quite small and expensive several years ago. So what? You probably didn&#8217;t plan an SSD upgrade on the source server, as you will decommission it as soon as the migration is finished.<\/p>\n<p>The solution is to use a temporary server fitted with fast SSDs. You don&#8217;t need a real server with a fully redundant configuration. You don&#8217;t even need RAID to protect your data, because this server will only be used once: JBOD is OK.<\/p>\n<h2>How to configure this server?<\/h2>\n<p>This server will have:<\/p>\n<ul>\n<li>exactly the same OS as the source server, or something really similar<\/li>\n<li>the exact same Oracle version<\/li>\n<li>the same configuration of the filesystems<\/li>\n<li>enough free space to restore the source database<\/li>\n<li>SSD-only storage for datafiles, without redundancy<\/li>\n<li>enough cores to maximise the parallelism level<\/li>\n<li>a shared folder to put the dump in; this folder will also be mounted on the target server<\/li>\n<li>a shared folder to pick up the latest backups from the source database<\/li>\n<li>enough bandwidth for the shared folders. 
A 1Gbps network is only about 100MB\/s, so don&#8217;t expect very high speed with that kind of network<\/li>\n<li>you don&#8217;t need a listener<\/li>\n<li>you&#8217;ll never use this database for your application<\/li>\n<li>if you&#8217;re reusing a server, make sure it will be dedicated to this purpose (no other running processes)<\/li>\n<\/ul>\n<h2>And regarding the license?<\/h2>\n<p>As you may know, this server needs a license. But you also know that during the migration project, you&#8217;ll have twice the licenses in use on your environment for several weeks: the old servers are still running, and the new servers already host migrated databases. To avoid any problem, you can use a server previously running Oracle databases and already decommissioned. Tweak it with SSDs and it will be fine. And please make sure to be fully compliant with the Oracle license on your target environment.<\/p>\n<h2>How to proceed?<\/h2>\n<p>We won&#8217;t use this server as a one-shot path for migration, because we need to check whether the method is good enough and also find the best settings for datapump.<\/p>\n<p>To proceed, the steps are:<\/p>\n<ul>\n<li>declare the database in \/etc\/oratab<\/li>\n<li>create a pfile on the source server and copy it to $ORACLE_HOME\/dbs on the temporary server<\/li>\n<li>edit the parameters to remove references to the source environment, for example local_listener, remote_listener and the Data Guard settings. 
The goal is to make sure starting this database will have no impact on production<\/li>\n<li>start the instance with this pfile<\/li>\n<li>restore the controlfile from the very latest controlfile autobackup<\/li>\n<li>restore the database<\/li>\n<li>recover the database and check the SCN<\/li>\n<li>take a new archivelog backup on the source database (to simulate the real scenario)<\/li>\n<li>catalog the backup folder on the temporary database with RMAN<\/li>\n<li>do another recover of the temporary database; it should apply the archivelogs of the day, then check the SCN again<\/li>\n<li>open the database in resetlogs mode<\/li>\n<li>create the target directory for datapump on the database<\/li>\n<li>do the datapump export with the maximum parallelism level (2 times the number of cores available on your server &#8211; it will be too many at the beginning, but not enough at the end). No need for flashback_scn here.<\/li>\n<\/ul>\n<p>You can try various parallelism levels to find the best value. Once you&#8217;ve found it, you can schedule the real migration.<\/p>\n<h2>Production migration<\/h2>\n<p>Now that you&#8217;ve mastered the method, let&#8217;s imagine that you plan to migrate production tonight at 18:00.<\/p>\n<p>09:00 &#8211; have a cup of coffee first, you&#8217;ll need it!<br \/>\n09:15 &#8211; remove all the datafiles on the temporary server, also remove the redologs and controlfiles, and empty the FRA. Only keep the pfile.<br \/>\n09:30 &#8211; startup force your temporary database, it should stop in nomount mode<br \/>\n09:45 &#8211; restore the latest controlfile autobackup on the temporary database. Make sure no datafile will be added today on production<br \/>\n10:00 &#8211; restore the database on the temporary server. During the restore, production is still available on the source server. 
At the end of the restore, do a first recover but DON&#8217;T open your database with resetlogs now<br \/>\n18:00 &#8211; your restore should be finished now; you can disconnect everyone from the source database and take the very latest archivelog backup on the source database. From now on, your application should be down.<br \/>\n18:20 &#8211; on your temporary database, catalog the backup folder with RMAN. It will discover the latest archivelog backups.<br \/>\n18:30 &#8211; do a recover of your temporary database again. It should apply the latest archivelogs (generated during the day). If you want to make sure that everything is OK, check the current_scn on the source database: it should be nearly the same as on your temporary database<br \/>\n18:45 &#8211; open the temporary database with RESETLOGS<br \/>\n19:00 &#8211; do the datapump export with your optimal settings<\/p>\n<p>Once done, you now have to do the datapump import on your target database. Parallelism will depend on the cores available on the target server, and on the resources you want to preserve for other databases already running on this server.<\/p>\n<h2>Benefits and drawbacks<\/h2>\n<p>The obvious benefit is that applying the archivelogs of the day on the temporary database probably takes less than 30 minutes. And the total duration of the export can be cut by several hours.<\/p>\n<p>The first drawback is that you&#8217;ll need a server of this kind, or you&#8217;ll need to build one. The second drawback concerns Standard Edition: don&#8217;t expect to save that many hours, as it has no parallelism at all. As you may know, Standard Edition does not serve big databases very well.<\/p>\n<h2>Real world example<\/h2>\n<p>This is a recent case. The source database is 12.1, about 2TB on mechanical disks. The datapump export was not working correctly: it lasted more than 19 hours, with lots of errors. One of the big problems of this database is a bigfile tablespace of 1.8TB. 
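<\/p>\n<p>To spot such tablespaces on your own source database, a simple dictionary query (names and sizes will obviously differ) is:<\/p>\n<p><code>sqlplus \/ as sysdba<br \/>\nselect t.tablespace_name, round(sum(d.bytes)\/1024\/1024\/1024) size_gb<br \/>\nfrom dba_tablespaces t join dba_data_files d on d.tablespace_name = t.tablespace_name<br \/>\nwhere t.bigfile = &#039;YES&#039;<br \/>\ngroup by t.tablespace_name;<br \/>\n<\/code><\/p>\n<p>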
Who did this kind of configuration?<\/p>\n<p>The temporary server is an already decommissioned DEV server running the same version of Oracle and the same Linux kernel. It is fitted with enough terabytes of SSD; the mount paths were changed to match the source database filesystems.<\/p>\n<p>On the source server:<br \/>\n<code><br \/>\nsu - oracle<br \/>\n. oraenv &lt;&lt;&lt; BP3<br \/>\nsqlplus \/ as sysdba<br \/>\ncreate pfile=&#039;\/tmp\/initBP3.ora&#039; from spfile;<br \/>\nexit<br \/>\nscp \/tmp\/initBP3.ora oracle@db32-test:\/tmp<br \/>\n<\/code><\/p>\n<p>On the temporary server:<br \/>\n<code>su - oracle<br \/>\ncp \/tmp\/initBP3.ora \/opt\/orasapq\/oracle\/product\/12.1.0.2\/dbs\/<br \/>\necho \"BP3:\/opt\/orasapq\/oracle\/product\/12.1.0.2:N\" &gt;&gt; \/etc\/oratab<br \/>\n. oraenv &lt;&lt;&lt; BP3<br \/>\nvi $ORACLE_HOME\/dbs\/initBP3.ora<br \/>\n<em>remove db_unique_name, dg_broker_start, fal_server, local_listener, log_archive_config, log_archive_dest_2, log_archive_dest_state_2, service_names from this pfile<\/em><br \/>\nsqlplus \/ as sysdba<br \/>\nstartup force nomount;<br \/>\nexit<br \/>\nls -lrt \/backup\/db42-prod\/BP3\/autobackup | tail -n 1<br \/>\n\/backup\/db42-prod\/BP3\/autobackup\/c-2226533455-20200219-01<br \/>\nrman target \/<br \/>\nrestore controlfile from &#039;\/backup\/db42-prod\/BP3\/autobackup\/c-2226533455-20200219-01&#039;;<br \/>\nalter database mount;<br \/>\nCONFIGURE DEVICE TYPE DISK PARALLELISM 8 BACKUP TYPE TO BACKUPSET;<br \/>\nrestore database;<br \/>\n...<br \/>\nrecover database;<br \/>\nexit;<br \/>\n<\/code><\/p>\n<p>On the source server:<br \/>\nTake a last backup of the archivelogs with your own script: the one used in scheduled tasks.<\/p>\n<p>On the temporary server:<br \/>\n<code>su - oracle<br \/>\n. 
oraenv &lt;&lt;&lt; BP3<br \/>\nrman target \/<br \/>\nselect current_scn from v$database;<br \/>\nCURRENT_SCN<br \/>\n-----------<br \/>\n11089172427<br \/>\ncatalog start with &#039;\/backup\/db42-prod\/BP3\/backupset\/&#039;;<br \/>\nrecover database;<br \/>\nselect current_scn from v$database;<br \/>\nCURRENT_SCN<br \/>\n-----------<br \/>\n11089175474<br \/>\nalter database open resetlogs;<br \/>\nexit;<br \/>\nsqlplus \/ as sysdba<br \/>\ncreate or replace directory mig as &#039;\/backup\/dumps\/&#039;;<br \/>\nexit<br \/>\nexpdp &#039;\/ as sysdba&#039; full=y directory=mig dumpfile=expfull_BP3_`date +%Y%m%d_%H%M`_%U.dmp parallel=24 logfile=expfull_BP3_`date +%Y%m%d_%H%M`.log<br \/>\n<\/code><\/p>\n<p>The export was done in less than 5 hours, about 4 times faster than on the source server. The database migration could now fit in one night. Much better, isn&#8217;t it?<\/p>\n<h2>Other solutions<\/h2>\n<p>If you&#8217;re used to Data Guard, you can create a standby on this temporary server dedicated to this purpose. No need to manually apply the latest archivelog backup of the day, because it&#8217;s already in sync. Just convert this standby to primary without impacting the source database, or do a simple switchover, then do the datapump export.<\/p>\n<p>Transportable tablespaces are a mixed solution where the datafiles are copied to the destination database and only the metadata is exported and imported. But don&#8217;t expect any kind of reorganization here.<\/p>\n<p>If you cannot afford several hours of downtime for the migration, you should think about logical replication. Solutions like GoldenGate are perfect for keeping the application running. But as you probably know, this comes at a cost.<\/p>\n<h2>Conclusion<\/h2>\n<p>If several hours of downtime is acceptable, datapump is still a good option for migration. 
Downtime is all about disk speed and parallelism.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Introduction Big Oracle databases (several TB) are still tough to migrate to another version on a new server. For most of them, you&#8217;ll probably use RMAN restore or Data Guard, but datapump is always a cleaner way to migrate. With datapump, you can easily migrate to a new filesystem (ASM for example), rethink your tablespace [&hellip;]<\/p>\n","protected":false},"author":45,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[229,198,199,59],"tags":[60,61,62,1836,1837,1212,15,467,1838,1839,1840],"type_dbi":[],"class_list":["post-13467","post","type-post","status-publish","format-standard","hentry","category-database-administration-monitoring","category-database-management","category-hardware-storage","category-oracle","tag-12c","tag-18c","tag-19c","tag-dump","tag-expdp","tag-export","tag-migration","tag-parallel","tag-slow-expdp","tag-speed-up-datapump","tag-speed-up-migration"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.2) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Speed up datapump export for migrating big databases - dbi Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.dbi-services.com\/blog\/speed-up-datapump-export-for-migrating-big-databases\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Speed up datapump export for migrating big databases\" \/>\n<meta property=\"og:description\" content=\"Introduction Big Oracle databases (several TB) are still tough to migrate to another version on a new server. 
For most of them, you&#8217;ll probably use RMAN restore or Data Guard, but datapump is always a cleaner way to migrate. With datapump, you can easily migrate to a new filesystem (ASM for example), rethink your tablespace [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.dbi-services.com\/blog\/speed-up-datapump-export-for-migrating-big-databases\/\" \/>\n<meta property=\"og:site_name\" content=\"dbi Blog\" \/>\n<meta property=\"article:published_time\" content=\"2020-02-21T12:39:17+00:00\" \/>\n<meta name=\"author\" content=\"J\u00e9r\u00f4me Dubar\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"J\u00e9r\u00f4me Dubar\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"9 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/speed-up-datapump-export-for-migrating-big-databases\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/speed-up-datapump-export-for-migrating-big-databases\/\"},\"author\":{\"name\":\"J\u00e9r\u00f4me Dubar\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/0fb4bbf128b4cda2f96d662dec2baedd\"},\"headline\":\"Speed up datapump export for migrating big databases\",\"datePublished\":\"2020-02-21T12:39:17+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/speed-up-datapump-export-for-migrating-big-databases\/\"},\"wordCount\":1571,\"commentCount\":0,\"keywords\":[\"12c\",\"18c\",\"19c\",\"dump\",\"expdp\",\"Export\",\"Migration\",\"parallel\",\"slow expdp\",\"speed up datapump\",\"speed up migration\"],\"articleSection\":[\"Database Administration &amp; Monitoring\",\"Database management\",\"Hardware &amp; 
Storage\",\"Oracle\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.dbi-services.com\/blog\/speed-up-datapump-export-for-migrating-big-databases\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/speed-up-datapump-export-for-migrating-big-databases\/\",\"url\":\"https:\/\/www.dbi-services.com\/blog\/speed-up-datapump-export-for-migrating-big-databases\/\",\"name\":\"Speed up datapump export for migrating big databases - dbi Blog\",\"isPartOf\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#website\"},\"datePublished\":\"2020-02-21T12:39:17+00:00\",\"author\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/0fb4bbf128b4cda2f96d662dec2baedd\"},\"breadcrumb\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/speed-up-datapump-export-for-migrating-big-databases\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.dbi-services.com\/blog\/speed-up-datapump-export-for-migrating-big-databases\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/speed-up-datapump-export-for-migrating-big-databases\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Accueil\",\"item\":\"https:\/\/www.dbi-services.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Speed up datapump export for migrating big databases\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#website\",\"url\":\"https:\/\/www.dbi-services.com\/blog\/\",\"name\":\"dbi 
Blog\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.dbi-services.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/0fb4bbf128b4cda2f96d662dec2baedd\",\"name\":\"J\u00e9r\u00f4me Dubar\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/secure.gravatar.com\/avatar\/efaa5a7def0aa4cdaf49a470fb4a7641a3ea6e378ae1455096a0933f99f46d6b?s=96&d=mm&r=g\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/efaa5a7def0aa4cdaf49a470fb4a7641a3ea6e378ae1455096a0933f99f46d6b?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/efaa5a7def0aa4cdaf49a470fb4a7641a3ea6e378ae1455096a0933f99f46d6b?s=96&d=mm&r=g\",\"caption\":\"J\u00e9r\u00f4me Dubar\"},\"description\":\"J\u00e9r\u00f4me Dubar has more than 15 years of experience in the field of Information Technology. Ten years ago, he specialized in the Oracle Database technology. His expertise is focused on database architectures, high availability (RAC), disaster recovery (DataGuard), backups (RMAN), performance analysis and tuning (AWR\/statspack), migration, consolidation and appliances, especially ODA (his main projects during the last years). Prior to joining dbi services, J\u00e9r\u00f4me Dubar worked in a Franco-Belgian IT service company as Database team manager and main consultant for 7 years. He also worked for 5 years in a software editor company as technical consultant across France. He was also teaching Oracle Database lessons for 9 years. J\u00e9r\u00f4me Dubar holds a Computer Engineering degree from the Lille Sciences and Technologies university in northern France. 
His branch-related experience covers the public sector, retail, industry, banking, health, e-commerce and IT sectors.\",\"url\":\"https:\/\/www.dbi-services.com\/blog\/author\/jerome-dubar\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Speed up datapump export for migrating big databases - dbi Blog","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.dbi-services.com\/blog\/speed-up-datapump-export-for-migrating-big-databases\/","og_locale":"en_US","og_type":"article","og_title":"Speed up datapump export for migrating big databases","og_description":"Introduction Big Oracle databases (several TB) are still tough to migrate to another version on a new server. For most of them, you&#8217;ll probably use RMAN restore or Data Guard, but datapump is always a cleaner way to migrate. With datapump, you can easily migrate to a new filesystem (ASM for example), rethink your tablespace [&hellip;]","og_url":"https:\/\/www.dbi-services.com\/blog\/speed-up-datapump-export-for-migrating-big-databases\/","og_site_name":"dbi Blog","article_published_time":"2020-02-21T12:39:17+00:00","author":"J\u00e9r\u00f4me Dubar","twitter_card":"summary_large_image","twitter_misc":{"Written by":"J\u00e9r\u00f4me Dubar","Est. 
reading time":"9 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.dbi-services.com\/blog\/speed-up-datapump-export-for-migrating-big-databases\/#article","isPartOf":{"@id":"https:\/\/www.dbi-services.com\/blog\/speed-up-datapump-export-for-migrating-big-databases\/"},"author":{"name":"J\u00e9r\u00f4me Dubar","@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/0fb4bbf128b4cda2f96d662dec2baedd"},"headline":"Speed up datapump export for migrating big databases","datePublished":"2020-02-21T12:39:17+00:00","mainEntityOfPage":{"@id":"https:\/\/www.dbi-services.com\/blog\/speed-up-datapump-export-for-migrating-big-databases\/"},"wordCount":1571,"commentCount":0,"keywords":["12c","18c","19c","dump","expdp","Export","Migration","parallel","slow expdp","speed up datapump","speed up migration"],"articleSection":["Database Administration &amp; Monitoring","Database management","Hardware &amp; Storage","Oracle"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.dbi-services.com\/blog\/speed-up-datapump-export-for-migrating-big-databases\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.dbi-services.com\/blog\/speed-up-datapump-export-for-migrating-big-databases\/","url":"https:\/\/www.dbi-services.com\/blog\/speed-up-datapump-export-for-migrating-big-databases\/","name":"Speed up datapump export for migrating big databases - dbi 
Blog","isPartOf":{"@id":"https:\/\/www.dbi-services.com\/blog\/#website"},"datePublished":"2020-02-21T12:39:17+00:00","author":{"@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/0fb4bbf128b4cda2f96d662dec2baedd"},"breadcrumb":{"@id":"https:\/\/www.dbi-services.com\/blog\/speed-up-datapump-export-for-migrating-big-databases\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.dbi-services.com\/blog\/speed-up-datapump-export-for-migrating-big-databases\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/www.dbi-services.com\/blog\/speed-up-datapump-export-for-migrating-big-databases\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Accueil","item":"https:\/\/www.dbi-services.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Speed up datapump export for migrating big databases"}]},{"@type":"WebSite","@id":"https:\/\/www.dbi-services.com\/blog\/#website","url":"https:\/\/www.dbi-services.com\/blog\/","name":"dbi Blog","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.dbi-services.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/0fb4bbf128b4cda2f96d662dec2baedd","name":"J\u00e9r\u00f4me Dubar","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/efaa5a7def0aa4cdaf49a470fb4a7641a3ea6e378ae1455096a0933f99f46d6b?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/efaa5a7def0aa4cdaf49a470fb4a7641a3ea6e378ae1455096a0933f99f46d6b?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/efaa5a7def0aa4cdaf49a470fb4a7641a3ea6e378ae1455096a0933f99f46d6b?s=96&d=mm&r=g","caption":"J\u00e9r\u00f4me Dubar"},"description":"J\u00e9r\u00f4me Dubar has more than 15 
years of experience in the field of Information Technology. Ten years ago, he specialized in the Oracle Database technology. His expertise is focused on database architectures, high availability (RAC), disaster recovery (DataGuard), backups (RMAN), performance analysis and tuning (AWR\/statspack), migration, consolidation and appliances, especially ODA (his main projects during the last years). Prior to joining dbi services, J\u00e9r\u00f4me Dubar worked in a Franco-Belgian IT service company as Database team manager and main consultant for 7 years. He also worked for 5 years in a software editor company as technical consultant across France. He was also teaching Oracle Database lessons for 9 years. J\u00e9r\u00f4me Dubar holds a Computer Engineering degree from the Lille Sciences and Technologies university in northern France. His branch-related experience covers the public sector, retail, industry, banking, health, e-commerce and IT sectors.","url":"https:\/\/www.dbi-services.com\/blog\/author\/jerome-dubar\/"}]}},"_links":{"self":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/13467","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/users\/45"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/comments?post=13467"}],"version-history":[{"count":0,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/13467\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/media?parent=13467"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/categories?post=13467"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dbi-services.com\
/blog\/wp-json\/wp\/v2\/tags?post=13467"},{"taxonomy":"type","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/type_dbi?post=13467"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}