{"id":10997,"date":"2018-02-27T15:59:16","date_gmt":"2018-02-27T14:59:16","guid":{"rendered":"https:\/\/www.dbi-services.com\/blog\/oda-migration-challenges\/"},"modified":"2018-02-27T15:59:16","modified_gmt":"2018-02-27T14:59:16","slug":"oda-migration-challenges","status":"publish","type":"post","link":"https:\/\/www.dbi-services.com\/blog\/oda-migration-challenges\/","title":{"rendered":"ODA migration challenges: Non-OMF to OMF + 11.2.0.3 to 11.2.0.4"},"content":{"rendered":"<p>To run some application and performance tests, I had to copy a database from a third party Linux server to an ODA X7-2M. It looks pretty simple on paper, but two small challenges came into play. The first was that the source database was, of course, in Non-OMF format while the ODA works fully in OMF. The second was that the source database was running 11.2.0.3, which is not supported and cannot be installed on the ODA &#8220;lite&#8221;. Therefore I had to find a way to copy the database onto 11.2.0.4 binaries and get the upgrade done before opening it.<\/p>\n<p><!--more--><\/p>\n<p>My first idea was of course to do a duplicate of the source database to the ODA.\u00a0 To get everything ready on the ODA side (folders, instance&#8230;), I simply created an 11.2.0.4 database using <em>ODACLI CREATE-DATABASE<\/em> and then shut it down to delete all data files, redo logs and control files.<\/p>\n<p>As a duplicate from the active database wasn&#8217;t possible, I checked the backups of the source database and looked for the best SCN to recover to. 
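<\/p>\n<p>To pick that SCN, the backup content can be reviewed directly from RMAN, for instance (just a sketch, run against the source database):<\/p>\n<pre class=\"brush: shell; gutter: true; first-line: 1\">RMAN&gt; list backup of database summary;\nRMAN&gt; list backup of archivelog all;<\/pre>\n<p>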
Once I had defined this, I could start preparing my duplicate as follows:<\/p>\n<pre class=\"brush: shell; gutter: true; first-line: 1\">RMAN&gt; run {\n2&gt; set until scn XXXXXXXXX;\n3&gt; allocate channel t1 type disk;\n4&gt; allocate channel t2 type disk;\n5&gt; allocate channel t3 type disk;\n6&gt; allocate channel t4 type disk;\n7&gt; allocate auxiliary channel a1 type disk;\n8&gt; allocate auxiliary channel a2 type disk;\n9&gt; allocate auxiliary channel a3 type disk;\n10&gt; allocate auxiliary channel a4 type disk;\n11&gt; duplicate target database to 'DBTST1';\n12&gt; }<\/pre>\n<p>As explained above, the first little challenge was that the source database is in Non-OMF while I wanted to make it &#8220;proper&#8221; on the ODA, which means an OMF-based structure.<\/p>\n<p>Usually in a duplicate you would use <em>db_file_name_convert<\/em> and <em>log_file_name_convert<\/em> to change the path of the files. The issue with this solution is that it will not rename the files to OMF style unless you handle them file by file.<\/p>\n<p>The second option is to use the RMAN command <em>SET NEWNAME FOR DATAFILE<\/em>. Here we have the same &#8220;issue&#8221;: it has to be done file by file, and I had more than 180 files. Of course I could easily script it with SQL*Plus, but the list would be awful and not easy to crosscheck if anything were missing. In addition, using <em>SET NEWNAME<\/em> requires some precaution, as the file names still need to look OMF generated. This can be handled by providing the following string for the file name: o1_mf_&lt;tablespace_name&gt;_%u_.dbf<\/p>\n<p>However, I still wanted a more &#8220;elegant&#8221; way. The solution was simply to use <em>SET NEWNAME FOR DATABASE<\/em> in conjunction with the <em>TO NEW<\/em> option. This automatically generates a new OMF file name for all database files. 
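<\/p>\n<p>Just for illustration, the file-per-file variant could have been generated from the source database with a quick SQL*Plus sketch like this one, spooled into the RMAN script:<\/p>\n<pre class=\"brush: sql; gutter: true; first-line: 1\">SQL&gt; set pages 0 feedback off\nSQL&gt; spool newnames.rman\nSQL&gt; select 'set newname for datafile ' || file# || ' to new;' from v$datafile order by file#;\nSQL&gt; spool off<\/pre>\n<p>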
The condition is that the following parameters are properly set on the auxiliary database:<\/p>\n<ul>\n<li>db_create_file_dest<\/li>\n<li>db_create_online_log_dest_n<br \/>\nConfigure from 1 to 5 depending on the number of members you want per redo log group<\/li>\n<li>control_files<br \/>\nShould be reset, as new control file names will be generated<\/li>\n<\/ul>\n<p>So I finally got the following RMAN script to run the duplicate:<\/p>\n<pre class=\"brush: shell; gutter: true; first-line: 1\">RMAN&gt; run {\n2&gt; set until scn XXXXXXXXX;\n3&gt; set newname for database to new;\n4&gt; allocate channel t1 type disk;\n5&gt; allocate channel t2 type disk;\n6&gt; allocate channel t3 type disk;\n7&gt; allocate channel t4 type disk;\n8&gt; allocate auxiliary channel a1 type disk;\n9&gt; allocate auxiliary channel a2 type disk;\n10&gt; allocate auxiliary channel a3 type disk;\n11&gt; allocate auxiliary channel a4 type disk;\n12&gt; duplicate target database to 'DBTST1';\n13&gt; }<\/pre>\n<p>At this point I had solved the Non-OMF to OMF conversion issue and almost had a copy of my database on the ODA.<\/p>\n<p>Why almost? Simply because the duplicate failed \ud83d\ude42<\/p>\n<p>Indeed, this is fully &#8220;normal&#8221; and part of the process. As you know, the last step of a duplicate is an <em>ALTER CLONE DATABASE OPEN RESETLOGS<\/em> on the auxiliary database. However, the database was still on 11.2.0.3 while the binaries on the ODA are 11.2.0.4. The result was the duplicate crashing on the last step, as the binaries are not compatible.<\/p>\n<p>This didn&#8217;t really matter, as the restore and recover operations worked, meaning that my database was at a consistent point in time. 
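<\/p>\n<p>A quick way to see where the data files stand at this point is to look at their headers; the checkpoint SCN and the FUZZY flag tell you whether an OPEN RESETLOGS has a chance to succeed (shown here simply as an illustration of the check):<\/p>\n<pre class=\"brush: sql; gutter: true; first-line: 1\">SQL&gt; select distinct checkpoint_change#, fuzzy from v$datafile_header;<\/pre>\n<p>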
Unfortunately, simply opening the database with <em>ALTER DATABASE OPEN RESETLOGS UPGRADE<\/em> did not work, claiming that the database needs media recovery:<\/p>\n<pre class=\"brush: sql; gutter: true; first-line: 1\">SQL&gt; alter database open resetlogs upgrade;\nalter database open resetlogs upgrade\n*\nERROR at line 1:\nORA-01194: file 1 needs more recovery to be consistent\nORA-01110: data file 1:\n'\/u02\/app\/oracle\/oradata\/DBTST1_SITE1\/DBTST1_SITE1\/datafile\/o1_mf_system_f98\n0sv9j_.dbf'<\/pre>\n<p>My first idea here was to try a <em>RECOVER DATABASE UNTIL CANCEL<\/em> and then try again, but I had nothing other than the last archive logs already applied during the duplicate \ud83d\ude41<\/p>\n<p>Another situation where you have to open a database with <em>RESETLOGS<\/em> is when you have restored the control files. So I chose to re-create the control file with a SQL script.<\/p>\n<pre class=\"brush: sql; gutter: true; first-line: 1\">CREATE CONTROLFILE REUSE DATABASE \"DBTST1\" RESETLOGS ARCHIVELOG\nMAXLOGFILES 202\nMAXLOGMEMBERS 5\nMAXDATAFILES 200\nMAXINSTANCES 1\nMAXLOGHISTORY 33012\nLOGFILE\nGROUP 1 SIZE 1G BLOCKSIZE 512,\nGROUP 2 SIZE 1G BLOCKSIZE 512,\nGROUP 3 SIZE 1G BLOCKSIZE 512\nDATAFILE\n'\/u02\/app\/oracle\/oradata\/DBTST1_SITE1\/DBTST1_SITE1\/datafile\/o1_mf_system_f980sv9j_.dbf',\n'\/u02\/app\/oracle\/oradata\/DBTST1_SITE1\/DBTST1_SITE1\/datafile\/o1_mf_sysaux_f97x0l8m_.dbf',\n'\/u02\/app\/oracle\/oradata\/DBTST1_SITE1\/DBTST1_SITE1\/datafile\/o1_mf_undotbs1_f97w67k2_.dbf',\n'\/u02\/app\/oracle\/oradata\/DBTST1_SITE1\/DBTST1_SITE1\/datafile\/o1_mf_users_f97w67md_.dbf',\n...\n...\nCHARACTER SET AL32UTF8;<\/pre>\n<p>The question here was where to find the information for my script, as <em>BACKUP CONTROLFILE TO TRACE<\/em> did not work in <em>MOUNT<\/em> status. I used the following statements:<\/p>\n<pre class=\"brush: sql; gutter: true; first-line: 1\">SQL&gt; select type,RECORDS_TOTAL from v$controlfile_record_section;\n\nTYPE\t\t\t     
RECORDS_TOTAL\n---------------------------- -------------\nDATABASE\t\t\t\t 1\t\t==&gt; MAXINSTANCES (obvious, as I'm in single instance :-) )\nCKPT PROGRESS\t\t\t\t11\nREDO THREAD\t\t\t\t 8\nREDO LOG\t\t\t       202\t\t==&gt; MAXLOGFILES\nDATAFILE\t\t\t       200\t\t==&gt; MAXDATAFILES\nFILENAME\t\t\t      3056\nTABLESPACE\t\t\t       200\nTEMPORARY FILENAME\t\t       200\nRMAN CONFIGURATION\t\t\t50\nLOG HISTORY\t\t\t     33012\t\t==&gt; MAXLOGHISTORY\nOFFLINE RANGE\t\t\t       245\n...\n...<\/pre>\n<p>&nbsp;<\/p>\n<pre class=\"brush: sql; gutter: true; first-line: 1\">SQL&gt; select group#,members,bytes\/1024\/1024,blocksize from v$log;\n\n    GROUP#    MEMBERS BYTES\/1024\/1024  BLOCKSIZE\n---------- ---------- --------------- ----------\n\t 1\t    2\t\t 1024\t     512\n\t 2\t    2\t\t 1024\t     512\n\t 3\t    2\t\t 1024\t     512<\/pre>\n<p>&nbsp;<\/p>\n<pre class=\"brush: sql; gutter: true; first-line: 1\">SQL&gt; select '''' || name || ''''||',' from v$datafile order by file# asc;\n\n''''||NAME||''''||','\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n'\/u02\/app\/oracle\/oradata\/DBTST1_SITE1\/DBTST1_SITE1\/datafile\/o1_mf_system_f980sv9j_.dbf',\n'\/u02\/app\/oracle\/oradata\/DBTST1_SITE1\/DBTST1_SITE1\/datafile\/o1_mf_sysaux_f97x0l8m_.dbf',\n'\/u02\/app\/oracle\/oradata\/DBTST1_SITE1\/DBTST1_SITE1\/datafile\/o1_mf_undotbs1_f97w67k2_.dbf',\n...\n...<\/pre>\n<p>&nbsp;<\/p>\n<p>Once the control file was re-created, the OPEN RESETLOGS was still failing with an ORA-01194. Hmm&#8230; 
same issue.<br \/>\nThen I finally tried to recover using the only files I had: the newly created online redo logs.<\/p>\n<pre class=\"brush: sql; gutter: true; first-line: 1\">SQL&gt; RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;\nORA-00279: change 20833391792 generated at 02\/26\/2018 14:36:55 needed for\nthread 1\nORA-00289: suggestion :\n\/u03\/app\/oracle\/fast_recovery_area\/DBTST1_SITE1\/archivelog\/2018_02_26\/o1_mf_1_\n1_%u_.arc\nORA-00280: change 20833391792 for thread 1 is in sequence #1\n\n\nSpecify log: {&lt;RET&gt;=suggested | filename | AUTO | CANCEL}\n\/u03\/app\/oracle\/redo\/DBTST1_SITE1\/onlinelog\/o1_mf_1_f983clsn_.log\nLog applied.\nMedia recovery complete.<\/pre>\n<p>Successful media recovery, great!<\/p>\n<p>Finally I got my database open with <em>RESETLOGS<\/em>!<\/p>\n<pre class=\"brush: sql; gutter: true; first-line: 1\">SQL&gt; alter database open resetlogs upgrade;\n\nDatabase altered.<\/pre>\n<p>At this point I just had to follow the traditional upgrade process from 11.2.0.3 to 11.2.0.4. 
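<\/p>\n<p>Throughout such an upgrade, <em>dba_registry<\/em> is the place to verify that all components end up valid; for example:<\/p>\n<pre class=\"brush: sql; gutter: true; first-line: 1\">SQL&gt; select comp_id, version, status from dba_registry;<\/pre>\n<p>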
The last trap was not to forget to create the temp files for the <em>TEMP<\/em> tablespace.<\/p>\n<pre class=\"brush: sql; gutter: true; first-line: 1\">SQL&gt; alter tablespace TEMP add tempfile size 30G;\n\nTablespace altered.\n\nSQL&gt; alter tablespace TEMP add tempfile size 30G;\n\nTablespace altered.<\/pre>\n<p>&nbsp;<\/p>\n<p>Then the upgrade process is quite easy:<\/p>\n<ol>\n<li>Run <em>utlu112i.sql<\/em> as pre-upgrade script<\/li>\n<li>Run <em>catupgrd.sql<\/em> for the upgrade<\/li>\n<li>Restart the database<\/li>\n<li>Run <em>utlu112s.sql<\/em> as post-upgrade script and make sure no error is shown and all components are valid<\/li>\n<li>Run <em>catuppst.sql<\/em> to finalize the upgrade<\/li>\n<li>Run <em>utlrp.sql<\/em> to re-compile the invalid objects<\/li>\n<\/ol>\n<p>Should you forget to add the temp files to the temporary tablespace, you will get multiple <em>ORA-25153 &#8220;Temporary Tablespace Is Empty&#8221;<\/em> errors (see note <strong>843899.1<\/strong>). Basically, the only thing to do in such a case is to add the temp files and re-run <em>catupgrd.sql<\/em>.<\/p>\n<p>Cheers!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>To run some application and performance tests, I had to copy a database from a third party Linux server to an ODA X7-2M. It looks pretty simple on paper, but two small challenges came into play. The first was that the source database was, of course, in Non-OMF format while the ODA works fully in OMF. 
[&hellip;]<\/p>\n","protected":false},"author":7,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[229],"tags":[280,311,1046,73,15,79,1303,96,17,219],"type_dbi":[],"class_list":["post-10997","post","type-post","status-publish","format-standard","hentry","category-database-administration-monitoring","tag-database","tag-duplicate","tag-engineered-system","tag-linux","tag-migration","tag-oda","tag-omf","tag-oracle","tag-oracle-11g","tag-upgrade"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.2) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>ODA migration challenges: Non-OMF to OMF + 11.2.0.3 to 11.2.0.4 - dbi Blog<\/title>\n<meta name=\"description\" content=\"This article describes how to migrate a Non-OMF 11.2.0.3 database to the ODA as an OMF-based 11.2.0.4 database\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.dbi-services.com\/blog\/oda-migration-challenges\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"ODA migration challenges: Non-OMF to OMF + 11.2.0.3 to 11.2.0.4\" \/>\n<meta property=\"og:description\" content=\"This article describes how to migrate a Non-OMF 11.2.0.3 database to the ODA as an OMF-based 11.2.0.4 database\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.dbi-services.com\/blog\/oda-migration-challenges\/\" \/>\n<meta property=\"og:site_name\" content=\"dbi Blog\" \/>\n<meta property=\"article:published_time\" content=\"2018-02-27T14:59:16+00:00\" \/>\n<meta name=\"author\" content=\"David Hueber\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" 
\/>\n\t<meta name=\"twitter:data1\" content=\"David Hueber\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/oda-migration-challenges\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/oda-migration-challenges\/\"},\"author\":{\"name\":\"David Hueber\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8873e20a98a02305870909f4e3d0088f\"},\"headline\":\"ODA migration challenges: Non-OMF to OMF + 11.2.0.3 to 11.2.0.4\",\"datePublished\":\"2018-02-27T14:59:16+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/oda-migration-challenges\/\"},\"wordCount\":862,\"commentCount\":0,\"keywords\":[\"database\",\"duplicate\",\"Engineered system\",\"Linux\",\"Migration\",\"ODA\",\"OMF\",\"Oracle\",\"Oracle 11g\",\"Upgrade\"],\"articleSection\":[\"Database Administration &amp; Monitoring\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.dbi-services.com\/blog\/oda-migration-challenges\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/oda-migration-challenges\/\",\"url\":\"https:\/\/www.dbi-services.com\/blog\/oda-migration-challenges\/\",\"name\":\"ODA migration challenges: Non-OMF to OMF + 11.2.0.3 to 11.2.0.4 - dbi Blog\",\"isPartOf\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#website\"},\"datePublished\":\"2018-02-27T14:59:16+00:00\",\"author\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8873e20a98a02305870909f4e3d0088f\"},\"description\":\"This articles describes how to migrate an Non-OMF 11.2.0.3 database to ODA as OMF based 11.2.0.4 
database\",\"breadcrumb\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/oda-migration-challenges\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.dbi-services.com\/blog\/oda-migration-challenges\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/oda-migration-challenges\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Accueil\",\"item\":\"https:\/\/www.dbi-services.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"ODA migration challenges: Non-OMF to OMF + 11.2.0.3 to 11.2.0.4\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#website\",\"url\":\"https:\/\/www.dbi-services.com\/blog\/\",\"name\":\"dbi Blog\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.dbi-services.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8873e20a98a02305870909f4e3d0088f\",\"name\":\"David Hueber\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/secure.gravatar.com\/avatar\/fc07284dbd5667f0bed32b0d8d64076ab885746973ea1b5c4e69c6fa7074cf59?s=96&d=mm&r=g\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/fc07284dbd5667f0bed32b0d8d64076ab885746973ea1b5c4e69c6fa7074cf59?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/fc07284dbd5667f0bed32b0d8d64076ab885746973ea1b5c4e69c6fa7074cf59?s=96&d=mm&r=g\",\"caption\":\"David Hueber\"},\"description\":\"David Hueber has ten years of experience in infrastructure operation &amp; management, engineering, and optimization. 
He is specialized in Oracle technologies (engineering, backup and recovery, high availability, etc.), Service Management standards and Oracle infrastructure operation processes (Service Desk, Change Management, Capacity Planning, etc.). David Hueber is ITILv3 Service Operation Lifecycle certified and Linux LPIC-1 certified. He received a university degree in Informatics and Networks at the IUT Mulhouse, France. He also studied Information Systems at the Conservatoire National des Arts et M\u00e9tiers in Mulhouse, France. His branch-related experience covers Financial Services \/ Banking, Chemicals &amp; Pharmaceuticals, Transport &amp; Logistics, Retail, Food, etc.\",\"url\":\"https:\/\/www.dbi-services.com\/blog\/author\/david-hueber\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","_links":{"self":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/10997","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/comments?post=10997"}],"version-history":[{"count":0,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/10997\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/media?parent=10997"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/categories?post=10997"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/tags?post=10997"},{"taxonomy":"type","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/type_dbi?post=10997"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}