{"id":31108,"date":"2024-02-21T00:47:27","date_gmt":"2024-02-20T23:47:27","guid":{"rendered":"https:\/\/www.dbi-services.com\/blog\/?p=31108"},"modified":"2024-04-12T15:55:56","modified_gmt":"2024-04-12T13:55:56","slug":"logical-offline-migration-to-exacc-with-oracle-zero-downtime-migration-zdm","status":"publish","type":"post","link":"https:\/\/www.dbi-services.com\/blog\/logical-offline-migration-to-exacc-with-oracle-zero-downtime-migration-zdm\/","title":{"rendered":"Logical Offline Migration to ExaCC with Oracle Zero Downtime Migration (ZDM)"},"content":{"rendered":"\n<p>A while ago I had been testing and blogging about ZDM, see my previous articles. And I finally had the chance to implement it at one of our customer to migrate on-premises database to Exadata Cloud @Customer. In this article I would like to share with you my experience migrating an on-premises database to ExaCC using ZDM Logical Offline Migration with a backup location. We intended to use this method, as mandatory one for small Oracle SE2 databases, and preferred one for huge Oracle SE2 databases.<\/p>\n\n\n<a class=\"wp-block-read-more\" href=\"https:\/\/www.dbi-services.com\/blog\/logical-offline-migration-to-exacc-with-oracle-zero-downtime-migration-zdm\/\" target=\"_self\">Read more<span class=\"screen-reader-text\">: Logical Offline Migration to ExaCC with Oracle Zero Downtime Migration (ZDM)<\/span><\/a>\n\n\n<h2 class=\"wp-block-heading\" id=\"h-naming-convention\">Naming convention<\/h2>\n\n\n\n<p>Of course I have anonymised all outputs to remove customer infrastructure names. 
So let&#8217;s use the following convention.<\/p>\n\n\n\n<p>ExaCC Cluster 01 node 01 : ExaCC-cl01n1<br>ExaCC Cluster 01 node 02 : ExaCC-cl01n2<br>On premises Source Host : vmonpr<br>Target db_unique_name on the ExaCC : ONPR_RZ2<br>Database Name to migrate : ONPR<br>ZDM Host : zdmhost<br>ZDM user : zdmuser<br>Domain : domain.com<br>ExaCC PDB to migrate to : ONPRZ_APP_001T<\/p>\n\n\n\n<p>We will migrate the on-premises single-tenant database, named ONPR, to a PDB on the ExaCC. The PDB will be named ONPRZ_APP_001T.<\/p>\n\n\n\n<p>We will migrate 3 schemas: USER1, USER2 and USER3.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-ports\">Ports<\/h2>\n\n\n\n<p>It is important to mention that the following ports are needed:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td class=\"has-text-align-center\" data-align=\"center\"><strong>Source<\/strong><\/td><td class=\"has-text-align-center\" data-align=\"center\"><strong>Destination<\/strong><\/td><td class=\"has-text-align-center\" data-align=\"center\"><strong>Port<\/strong><\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">ZDM Host<\/td><td class=\"has-text-align-center\" data-align=\"center\">On-premise Host<\/td><td class=\"has-text-align-center\" data-align=\"center\">22<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">ZDM Host<\/td><td class=\"has-text-align-center\" data-align=\"center\">On-premise Host<\/td><td class=\"has-text-align-center\" data-align=\"center\">Oracle Net<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">ZDM Host<\/td><td class=\"has-text-align-center\" data-align=\"center\">ExaCC VM (both nodes)<\/td><td class=\"has-text-align-center\" data-align=\"center\">22<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">ZDM Host<\/td><td class=\"has-text-align-center\" data-align=\"center\">ExaCC (scan + VIP)<\/td><td class=\"has-text-align-center\" data-align=\"center\">Oracle Net<\/td><\/tr><tr><td 
class=\"has-text-align-center\" data-align=\"center\">On-premise Host<\/td><td class=\"has-text-align-center\" data-align=\"center\">NFS Server<\/td><td class=\"has-text-align-center\" data-align=\"center\">111<br>2049<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">ExaCC<\/td><td class=\"has-text-align-center\" data-align=\"center\">NFS Server<\/td><td class=\"has-text-align-center\" data-align=\"center\">111<br>2049<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>If Oracle Net ports are for example not opened between ZDM Host and ExaCC, the migration evaluation will immediately stopped at first steps named ZDM_VALIDATE_TGT, and following errors will be found in the log file:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>PRGZ-3181 : Internal error: ValidateTargetDbLogicalZdm-5-PRGD-1059 : query to retrieve NLS database parameters failed\nPRGD-1002 : SELECT statement \"SELECT * FROM GLOBAL_NAME\" execution as user \"system\" failed for database with Java Database Connectivity (JDBC) URL \"jdbc:oracle:thin:@(description=(address=(protocol=tcp)(port=1521)(host=ExaCC-cl01-scan.domain.com))(connect_data=(service_name=ONPRZ_APP_001T_PRI.domain.com)))\"\nIO Error: The Network Adapter could not establish the connection (CONNECTION_ID=9\/tZ9Bt5Q5q5VfqU7JC\/xA==)<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-requirements\">Requirements<\/h2>\n\n\n\n<p>There is a few requirements that are needed<\/p>\n\n\n\n<h4>streams_pool_size instance parameter on the source database<\/h4>\n\n\n\n<p>To have an initial pool allocated and optimal Data Pump performance, source DB instance parameter needs to be set to minimal 256-300 MB for Logical Offline Migration.<\/p>\n\n\n\n<\/br>\n\n\n\n<h4>Passwordless Login<\/h4>\n\n\n\n<p>Passwordless Login needs to be configured between ZDM Host, the Source Host and Target Host. 
See my previous blog: <a href=\"https:\/\/www.dbi-services.com\/blog\/oracle-zdm-migration-java-security-invalidkeyexception-invalid-key-format\/\">https:\/\/www.dbi-services.com\/blog\/oracle-zdm-migration-java-security-invalidkeyexception-invalid-key-format\/<\/a><\/p>\n\n\n\n<p>If Passwordless Login is not configured for one of the nodes, you will see an error like the following in the log file during the migration evaluation:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>PRCZ-2006 : Unable to establish SSH connection to node \"ExaCC-cl01n2\" to execute command \"&lt;command_to_be_executed&gt;\"\nNo more authentication methods available.<\/code><\/pre>\n\n\n\n<\/br>\n\n\n\n<h4>Database Character Set<\/h4>\n\n\n\n<p>The ExaCC target CDB should have the same character set as the on-premises source DB. If the final CDB where you would like to host your new PDB has, for example, the AL32UTF8 character set&nbsp;(so this CDB can host PDBs with various character sets) and your source DB is not in AL32UTF8, you will need to go through a temporary CDB on the ExaCC before relocating the PDB to the final one.<\/p>\n\n\n\n<p>To check the character set, run the following statement on the on-premises source DB:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>SQL&gt; select parameter, value from v$nls_parameters where parameter='NLS_CHARACTERSET';<\/code><\/pre>\n\n\n\n<p>If your ExaCC target CDB character set (here, as an example, AL32UTF8) does not match the on-premises source DB character set (here, as an example, WE8ISO8859P1), you will get the following ZDM error during the evaluation of the migration:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>PRGZ-3549 : Source NLS character set WE8ISO8859P1 is different from target NLS character set AL32UTF8.<\/code><\/pre>\n\n\n\n<\/br>\n\n\n\n<h4>Create PDB on the ExaCC<\/h4>\n\n\n\n<p>The final PDB will have to be created in one of the ExaCC container databases, according to the character set of the source database.<\/p>\n\n\n\n<\/br>\n\n\n\n<h4>Create NFS directory<\/h4>\n\n\n\n<p>NFS 
directory and Oracle directories need to be set up to store the Oracle dump files created automatically by ZDM. We will create the file system directory on the NFS mount point and a new Oracle directory named MIG_SOURCE_DEST in both databases (source and target). The NFS directory should be accessible and shared between&nbsp;both environments.<\/p>\n\n\n\n<p>If you do not have any shared NFS between source and target, you will get the following kind of errors when evaluating the migration:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>zdmhost: 2024-02-06T14:14:17.001Z : Executing phase ZDM_VALIDATE_DATAPUMP_SETTINGS_TGT\nzdmhost: 2024-02-06T14:14:19.583Z : validating Oracle Data Pump dump directory \/u02\/app\/oracle\/product\/19.0.0.0\/dbhome_2\/rdbms\/log\/10B7A59DF2E82A9AE063021FA10ABD38 ...\nzdmhost: 2024-02-06T14:14:19.587Z : listing directory path \/u02\/app\/oracle\/product\/19.0.0.0\/dbhome_2\/rdbms\/log\/10B7A59DF2E82A9AE063021FA10ABD38 on node ExaCC-cl01n1.domain.com ...\nPRGZ-1211 : failed to validate specified database directory object path \"\/u02\/app\/oracle\/product\/19.0.0.0\/dbhome_2\/rdbms\/log\/10B7A59DF2E82A9AE063021FA10ABD38\"\nPRGZ-1420 : specified database import directory object path \"\/u02\/app\/oracle\/product\/19.0.0.0\/dbhome_2\/rdbms\/log\/10B7A59DF2E82A9AE063021FA10ABD38\" is not shared between source and target database server<\/code><\/pre>\n\n\n\n<p>After having created the directory on the shared NFS, which will be visible from both the source and the target, you will need to create an Oracle directory (or use an existing one). I decided to create a new one, named MIG_SOURCE_DEST. 
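<\/p>\n\n\n\n<p>Before creating the directory object, it is worth checking from the OS that the share is mounted and writable on both sides. A quick sketch (the path is this post's example; a local stand-in directory is used here so the commands can run anywhere):<\/p>

```shell
# Stand-in for the shared mount point used in this post (/mnt/nfs_share/ONPR);
# point DUMP_DIR at the real NFS path in your environment.
DUMP_DIR="${DUMP_DIR:-/tmp/nfs_share_demo/ONPR}"
mkdir -p "$DUMP_DIR"

# The dump location must be writable by the DB owner on the source (export)
# and readable on the target (import).
touch "$DUMP_DIR/.zdm_write_test" && rm "$DUMP_DIR/.zdm_write_test" \
  && echo "writable: $DUMP_DIR"

# Show numeric uid/gid: the oracle OS user on the ExaCC nodes may not have
# the same ids as on the source host, so group permissions usually matter.
ls -ldn "$DUMP_DIR"
```

<p>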
The following will have to be run on both the source and the target databases.<\/p>\n\n\n\n<pre class=\"brush: sql; gutter: true; first-line: 1; highlight: [1,5]\">\nSQL&gt; create directory MIG_SOURCE_DEST as '\/mnt\/nfs_share\/ONPR\/';\n\nDirectory created.\n\nSQL&gt; select directory_name, directory_path from dba_directories where upper(directory_name) like '%MIG%';\n\nDIRECTORY_NAME                 DIRECTORY_PATH\n------------------------------ ------------------------------------------------------------\nMIG_SOURCE_DEST                \/mnt\/nfs_share\/ONPR\/\n<\/pre>\n<\/br>\n\n\n\n<p>You will also need to set correct permissions on the folder, knowing that the ExaCC OS user might not have the same ID as the Source Host OS user.<\/p>\n\n\n\n<\/br>\n\n\n\n<h4>Source user password version<\/h4>\n\n\n\n<p>It is mandatory that the passwords of all migrated user schemas have at least the 12C password version. For old password versions like 10G or 11G, the user's password needs to be changed to avoid additional troubleshooting and actions during the ZDM migration.<\/p>\n\n\n\n<p>To check the user password versions on the source, run the following SQL statement:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>SQL&gt; select username, account_status, lock_date, password_versions from dba_users where ORACLE_MAINTAINED='N';<\/code><\/pre>\n\n\n\n<\/br>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-prepare-zdm-response-file\"><a>Prepare ZDM response file<\/a><\/h2>\n\n\n\n<p>We will use the ZDM response file template named zdm_logical_template.rsp and adapt it.<\/p>\n\n\n\n<pre class=\"brush: sql; gutter: true; first-line: 1; highlight: [1]\">\n[zdmuser@zdmhost migration]$ cp \/u01\/app\/oracle\/product\/zdm\/rhp\/zdm\/template\/zdm_logical_template.rsp .\/zdm_ONPR_logical_offline.rsp\n<\/pre>\n<\/br>\n\n\n\n<p>The main parameters to take care of are:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table style=\"font-family:Courier, monospace;font-size:9pt\"><tbody>\n<tr><td class=\"has-text-align-center\" 
data-align=\"center\"><strong>Parameter<\/strong><\/td><td class=\"has-text-align-center\" data-align=\"center\"><strong>Explanation<\/strong><\/td><\/tr>\n<tr><td class=\"has-text-align-center\" data-align=\"center\"><span lang=\"EN-GB\" style=\"text-align: start;font-size: 9pt;, serif\">DATA_TRANSFER_MEDIUM<\/span><span style=\"font-family: -webkit-standard;font-size: medium;text-align: start\"><\/span><\/td><td class=\"has-text-align-center\" data-align=\"center\">Specifies how data will be transferred from the source database system to the target database system.<br>To be NFS<\/td><\/tr>\n<tr><td class=\"has-text-align-center\" data-align=\"center\">TARGETDATABASE_ADMINUSERNAME<\/td><td class=\"has-text-align-center\" data-align=\"center\">User to be used on the target for the migration.<br>To be SYSTEM<\/td><\/tr>\n<tr><td class=\"has-text-align-center\" data-align=\"center\">SOURCEDATABASE_ADMINUSERNAME<\/td><td class=\"has-text-align-center\" data-align=\"center\">User to be used on the source for the migration.<br>To be SYSTEM<\/td><\/tr>\n<tr><td class=\"has-text-align-center\" data-align=\"center\">SOURCEDATABASE_CONNECTIONDETAILS_HOST<\/td><td class=\"has-text-align-center\" data-align=\"center\">Source listener host<\/td><\/tr>\n<tr><td class=\"has-text-align-center\" data-align=\"center\">SOURCEDATABASE_CONNECTIONDETAILS_PORT<\/td><td class=\"has-text-align-center\" data-align=\"center\">Source listener port.<\/td><\/tr>\n<tr><td class=\"has-text-align-center\" data-align=\"center\">SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME<\/td><td class=\"has-text-align-center\" data-align=\"center\">Source database service name<\/td><\/tr>\n<tr><td class=\"has-text-align-center\" data-align=\"center\">TARGETDATABASE_CONNECTIONDETAILS_HOST<\/td><td class=\"has-text-align-center\" data-align=\"center\">Target listener host (on ExaCC scan listener)<\/td><\/tr>\n<tr><td class=\"has-text-align-center\" 
data-align=\"center\">TARGETDATABASE_CONNECTIONDETAILS_PORT<\/td><td class=\"has-text-align-center\" data-align=\"center\">Target listener port.<br>To be 1521<\/td><\/tr>\n<tr><td class=\"has-text-align-center\" data-align=\"center\">TARGETDATABASE_CONNECTIONDETAILS_SERVICENAME<\/td><td class=\"has-text-align-center\" data-align=\"center\">Target database service name<\/td><\/tr>\n<tr><td class=\"has-text-align-center\" data-align=\"center\">TARGETDATABASE_DBTYPE<\/td><td class=\"has-text-align-center\" data-align=\"center\">Target environment<br>To be EXADATA<\/td><\/tr>\n<tr><td class=\"has-text-align-center\" data-align=\"center\">DATAPUMPSETTINGS_SCHEMABATCH-1<\/td><td class=\"has-text-align-center\" data-align=\"center\">Comma separated list of Database schemas to be migrated<\/td><\/tr>\n<tr><td class=\"has-text-align-center\" data-align=\"center\">DATAPUMPSETTINGS_SCHEMABATCHCOUNT<\/td><td class=\"has-text-align-center\" data-align=\"center\">Exclusive with schemaBatch option. If specified, user schemas are identified<\/td><\/tr>\n<tr><td class=\"has-text-align-center\" data-align=\"center\">DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE<\/td><td class=\"has-text-align-center\" data-align=\"center\">Maximum number of worker processes that can be used for a Data Pump Import job.<br>Value should not be set for SE2.<\/td><\/tr>\n<tr><td class=\"has-text-align-center\" data-align=\"center\">DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXPORTPARALLELISMDEGREE<\/td><td class=\"has-text-align-center\" data-align=\"center\">Maximum number of worker processes that can be used for a Data Pump Export job.<br>Value should not be set for SE2.<\/td><\/tr>\n<tr><td class=\"has-text-align-center\" data-align=\"center\">DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXCLUDETYPELIST<\/td><td class=\"has-text-align-center\" data-align=\"center\">Specifies a comma separated list of object types to exclude<\/td><\/tr>\n<tr><td class=\"has-text-align-center\" 
data-align=\"center\">DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_NAME<\/td><td class=\"has-text-align-center\" data-align=\"center\">Oracle DBA directory that was created on the source for the export<\/td><\/tr>\n<tr><td class=\"has-text-align-center\" data-align=\"center\">DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_PATH<\/td><td class=\"has-text-align-center\" data-align=\"center\">NFS directory for dump that is used for export<\/td><\/tr>\n<tr><td class=\"has-text-align-center\" data-align=\"center\">DATAPUMPSETTINGS_IMPORTDIRECTORYOBJECT_NAME<\/td><td class=\"has-text-align-center\" data-align=\"center\">Oracle DBA directory that was created on the source for the import<\/td><\/tr>\n<tr><td class=\"has-text-align-center\" data-align=\"center\">DATAPUMPSETTINGS_IMPORTDIRECTORYOBJECT_PATH<\/td><td class=\"has-text-align-center\" data-align=\"center\">NFS directory for dump that is used for import<\/td><\/tr>\n<tr><td class=\"has-text-align-center\" data-align=\"center\">TABLESPACEDETAILS_AUTOCREATE<\/td><td class=\"has-text-align-center\" data-align=\"center\">If set to TRUE, ZDM will automatically create the tablespaces<br>To be TRUE<\/td><\/tr>\n<\/tbody><\/table><\/figure>\n\n\n\n<p>Updated ZDM response file compared to ZDM template for the migration we are going to run:<\/p>\n\n\n\n<pre class=\"brush: sql; gutter: true; first-line: 1; highlight: [1]\">\n[zdmuser@zdmhost migration]$ diff zdm_ONPR_logical_offline.rsp \/u01\/app\/oracle\/product\/zdm\/rhp\/zdm\/template\/zdm_logical_template.rsp\n30c30\n&lt; DATA_TRANSFER_MEDIUM=NFS\n---\n&gt; DATA_TRANSFER_MEDIUM=OSS\n47c47\n&lt; TARGETDATABASE_ADMINUSERNAME=system\n---\n&gt; TARGETDATABASE_ADMINUSERNAME=\n63c63\n&lt; SOURCEDATABASE_ADMINUSERNAME=system\n---\n&gt; SOURCEDATABASE_ADMINUSERNAME=\n80c80\n&lt; SOURCEDATABASE_CONNECTIONDETAILS_HOST=vmonpr\n---\n&gt; SOURCEDATABASE_CONNECTIONDETAILS_HOST=\n90c90\n&lt; SOURCEDATABASE_CONNECTIONDETAILS_PORT=13000\n---\n&gt; SOURCEDATABASE_CONNECTIONDETAILS_PORT=\n102c102\n&lt; 
SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME=ONPR.domain.com\n---\n&gt; SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME=\n153c153\n&lt; TARGETDATABASE_CONNECTIONDETAILS_HOST=ExaCC-cl01-scan.domain.com\n---\n&gt; TARGETDATABASE_CONNECTIONDETAILS_HOST=\n163c163\n&lt; TARGETDATABASE_CONNECTIONDETAILS_PORT=1521\n---\n&gt; TARGETDATABASE_CONNECTIONDETAILS_PORT=\n175c175\n&lt; TARGETDATABASE_CONNECTIONDETAILS_SERVICENAME=ONPRZ_APP_001T_PRI.domain.com\n---\n&gt; TARGETDATABASE_CONNECTIONDETAILS_SERVICENAME=\n307c307\n&lt; TARGETDATABASE_DBTYPE=EXADATA\n---\n&gt; TARGETDATABASE_DBTYPE=\n726c726\n&lt; DATAPUMPSETTINGS_SCHEMABATCH-1=USER1,USER2,USER3\n---\n&gt; DATAPUMPSETTINGS_SCHEMABATCH-1=\n947c947\n&lt; DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE=1\n---\n&gt; DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE=\n957c957\n&lt; DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXPORTPARALLELISMDEGREE=1\n---\n&gt; DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXPORTPARALLELISMDEGREE=\n969c969\n&lt; DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXCLUDETYPELIST=STATISTICS\n---\n&gt; DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXCLUDETYPELIST=\n1137c1137\n&lt; DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_NAME=MIG_SOURCE_DEST\n---\n&gt; DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_NAME=\n1146c1146\n&lt; DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_PATH=\/mnt\/nfs_share\/ONPR\n---\n&gt; DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_PATH=\n1166c1166\n&lt; DATAPUMPSETTINGS_IMPORTDIRECTORYOBJECT_NAME=MIG_SOURCE_DEST\n---\n&gt; DATAPUMPSETTINGS_IMPORTDIRECTORYOBJECT_NAME=\n1175c1175\n&lt; DATAPUMPSETTINGS_IMPORTDIRECTORYOBJECT_PATH=\/mnt\/nfs_nfs_share\/ONPR\n---\n&gt; DATAPUMPSETTINGS_IMPORTDIRECTORYOBJECT_PATH=\n2146c2146\n&lt; TABLESPACEDETAILS_AUTOCREATE=TRUE\n---\n&gt; TABLESPACEDETAILS_AUTOCREATE=\n<\/pre>\n<\/br>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-zdm-build-version\">ZDM Build Version<\/h2>\n\n\n\n<p>I&#8217;m using ZDM build 21.4.<\/p>\n\n\n\n<pre class=\"brush: sql; gutter: true; first-line: 1; highlight: 
[1]\">\n[zdmuser@zdmhost migration]$ \/u01\/app\/oracle\/product\/zdm\/bin\/zdmcli -build\nversion: 21.0.0.0.0\nfull version: 21.4.0.0.0\npatch version: 21.4.1.0.0\nlabel date: 221207.25\nZDM kit build date: Jul 31 2023 14:24:25 UTC\nCPAT build version: 23.7.0\n<\/pre>\n<\/br>\n\n\n\n<p>The migration will be done using ZDM CLI (zdmcli), which run migration through jobs. We can abort, query, modify, suspend or resume a running job.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-evaluate-the-migration\">Evaluate the migration<\/h2>\n\n\n\n<p>We will first run zdmcli with the -eval option to evaluate the migration and test if all is ok.<\/p>\n\n\n\n<p>We need to provide some arguments :<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table style=\", Courier, monospace;font-size:9pt\"><tbody>\n    <tr><td class=\"has-text-align-center\" data-align=\"center\"><strong>Argument<\/strong><\/td><td class=\"has-text-align-center\" data-align=\"center\"><strong>Value<\/strong><\/td><\/tr>\n    <tr><td class=\"has-text-align-center\" data-align=\"center\">-sourcesid<\/td><td class=\"has-text-align-center\" data-align=\"center\">Database Name of the source database in case the source database is a single instance deployed on a non Grid Infrastructure environment<\/td><\/tr>\n    <tr><td class=\"has-text-align-center\" data-align=\"center\">-rsp<\/td><td class=\"has-text-align-center\" data-align=\"center\">ZDM response file<\/td><\/tr>\n    <tr><td class=\"has-text-align-center\" data-align=\"center\">-sourcenode<\/td><td class=\"has-text-align-center\" data-align=\"center\">Source host<\/td><\/tr>\n    <tr><td class=\"has-text-align-center\" data-align=\"center\">-srcauth with 3 sub-arguments:<br>\n-srcarg1<br>\n-srcarg2<br>\n-srcarg3\n<\/td><td class=\"has-text-align-center\" data-align=\"center\">Name of the source authentication plug-in with 3 sub-arguments:<br>\n1st argument: user. 
Should be oracle<br>\n2nd argument: ZDM private RSA Key<br>\n3rd argument: sudo location\n<\/td><\/tr>\n    <tr><td class=\"has-text-align-center\" data-align=\"center\">-targetnode<\/td><td class=\"has-text-align-center\" data-align=\"center\">Target host<\/td><\/tr>\n    <tr><td class=\"has-text-align-center\" data-align=\"center\">-tgtauth with 3 sub-arguments:<br>\n        -tgtarg1<br>\n        -tgtarg2<br>\n        -tgtarg3\n        <\/td><td class=\"has-text-align-center\" data-align=\"center\">Name of the target authentication plug-in with 3 sub-arguments:<br>\n            1st argument: user. Should be opc<br>\n            2nd argument: ZDM private RSA Key<br>\n            3rd argument: sudo location\n            <\/td><\/tr>\n<\/tbody><\/table><\/figure>\n\n\n\n<pre class=\"brush: sql; gutter: true; first-line: 1; highlight: [1,8,13,16]\">\n[zdmuser@zdmhost migration]$ \/u01\/app\/oracle\/product\/zdm\/bin\/zdmcli migrate database -sourcesid ONPR -rsp \/home\/zdmuser\/migration\/zdm_ONPR_logical_offline.rsp -sourcenode vmonpr -srcauth zdmauth -srcarg1 user:oracle -srcarg2 identity_file:\/home\/zdmuser\/.ssh\/id_rsa -srcarg3 sudo_location:\/usr\/bin\/sudo -targetnode ExaCC-cl01n1 -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:\/home\/zdmuser\/.ssh\/id_rsa -tgtarg3 sudo_location:\/usr\/bin\/sudo -eval\nzdmhost.domain.com: Audit ID: 194\nEnter source database administrative user \"system\" password:\nEnter target database administrative user \"system\" password:\nOperation \"zdmcli migrate database\" scheduled with the job ID \"27\".\n[zdmuser@zdmhost migration]$\n\n[zdmuser@zdmhost migration]$ \/u01\/app\/oracle\/product\/zdm\/bin\/zdmcli query job -jobid 27\nzdmhost.domain.com: Audit ID: 197\nJob ID: 27\nUser: zdmuser\nClient: zdmhost\nJob Type: \"EVAL\"\nScheduled job command: \"zdmcli migrate database -sourcesid ONPR -rsp \/home\/zdmuser\/migration\/zdm_ONPR_logical_offline.rsp -sourcenode vmonpr -srcauth zdmauth -srcarg1 user:oracle -srcarg2 
identity_file:\/home\/zdmuser\/.ssh\/id_rsa -srcarg3 sudo_location:\/usr\/bin\/sudo -targetnode ExaCC-cl01n1 -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:\/home\/zdmuser\/.ssh\/id_rsa -tgtarg3 sudo_location:\/usr\/bin\/sudo -eval\"\nScheduled job execution start time: 2024-02-06T16:03:49+01. Equivalent local time: 2024-02-06 16:03:49\nCurrent status: SUCCEEDED\nResult file path: \"\/u01\/app\/oracle\/chkbase\/scheduled\/job-27-2024-02-06-16:04:01.log\"\nMetrics file path: \"\/u01\/app\/oracle\/chkbase\/scheduled\/job-27-2024-02-06-16:04:01.json\"\nExcluded objects file path: \"\/u01\/app\/oracle\/chkbase\/scheduled\/job-27-filtered-objects-2024-02-06T16:04:13.522.json\"\nJob execution start time: 2024-02-06 16:04:01\nJob execution end time: 2024-02-06 16:05:55\nJob execution elapsed time: 1 minutes 54 seconds\nZDM_VALIDATE_TGT ...................... COMPLETED\nZDM_VALIDATE_SRC ...................... COMPLETED\nZDM_SETUP_SRC ......................... COMPLETED\nZDM_PRE_MIGRATION_ADVISOR ............. COMPLETED\nZDM_VALIDATE_DATAPUMP_SETTINGS_SRC .... COMPLETED\nZDM_VALIDATE_DATAPUMP_SETTINGS_TGT .... COMPLETED\nZDM_PREPARE_DATAPUMP_SRC .............. COMPLETED\nZDM_DATAPUMP_ESTIMATE_SRC ............. COMPLETED\nZDM_CLEANUP_SRC ....................... COMPLETED\n<\/pre>\n<\/br>\n\n\n\n<p>We can see that the Job Type is EVAL, and that the Current Status is SUCCEEDED, with all precheck steps having a COMPLETED status.<\/p>\n\n\n\n<p>We can also review the log file, which provides more information. We will see all the checks that the tool is doing, as well as the output of the advisor, which is already warning us about old passwords for some users. Reviewing all the advisor outputs might help. We can also see that ZDM will ignore a few ORA errors as non-critical. 
This makes sense because the migration should still succeed even if, for example, the user already exists with no objects.<\/p>\n\n\n\n<pre class=\"brush: sql; gutter: true; first-line: 1; highlight: [1]\">\n[zdmuser@zdmhost ~]$ cat \/u01\/app\/oracle\/chkbase\/scheduled\/job-27-2024-02-06-16:04:01.log\nzdmhost: 2024-02-06T15:04:01.505Z : Starting zero downtime migrate operation ...\nzdmhost: 2024-02-06T15:04:01.511Z : Executing phase ZDM_VALIDATE_TGT\nzdmhost: 2024-02-06T15:04:04.952Z : Fetching details of on premises Exadata Database \"ONPRZ_APP_001T_PRI.domain.com\"\nzdmhost: 2024-02-06T15:04:04.953Z : Type of database : \"Exadata at Customer\"\nzdmhost: 2024-02-06T15:04:05.014Z : Verifying configuration and status of target database \"ONPRZ_APP_001T_PRI.domain.com\"\nzdmhost: 2024-02-06T15:04:09.067Z : Global database name: ONPRZ_APP_001T.DOMAIN.COM\nzdmhost: 2024-02-06T15:04:09.067Z : Target PDB name : ONPRZ_APP_001T\nzdmhost: 2024-02-06T15:04:09.068Z : Database major version : 19\nzdmhost: 2024-02-06T15:04:09.069Z : obtaining database ONPRZ_APP_001T.DOMAIN.COM tablespace configuration details...\nzdmhost: 2024-02-06T15:04:09.585Z : Execution of phase ZDM_VALIDATE_TGT completed\nzdmhost: 2024-02-06T15:04:09.670Z : Executing phase ZDM_VALIDATE_SRC\nzdmhost: 2024-02-06T15:04:09.736Z : Verifying configuration and status of source database \"ONPR.domain.com\"\nzdmhost: 2024-02-06T15:04:09.737Z : source database host vmonpr service ONPR.domain.com\nzdmhost: 2024-02-06T15:04:13.464Z : Global database name: ONPR.DOMAIN.COM\nzdmhost: 2024-02-06T15:04:13.465Z : Database major version : 19\nzdmhost: 2024-02-06T15:04:13.466Z : Validating database time zone compatibility...\nzdmhost: 2024-02-06T15:04:13.521Z : Database objects which will be migrated : [USER2, USER3]\nzdmhost: 2024-02-06T15:04:13.530Z : Execution of phase ZDM_VALIDATE_SRC completed\nzdmhost: 2024-02-06T15:04:13.554Z : Executing phase ZDM_SETUP_SRC\nzdmhost: 2024-02-06T15:05:04.925Z : Execution of phase 
ZDM_SETUP_SRC completed\nzdmhost: 2024-02-06T15:05:04.944Z : Executing phase ZDM_PRE_MIGRATION_ADVISOR\nzdmhost: 2024-02-06T15:05:05.371Z : Running CPAT (Cloud Premigration Advisor Tool) on the source node vmonpr ...\nzdmhost: 2024-02-06T15:05:07.894Z : Premigration advisor output:\nCloud Premigration Advisor Tool Version 23.7.0\nCPAT-4007: Warning: the build date for this version of the Cloud Premigration Advisor Tool is over 216 days.  Please run \"premigration.sh --updatecheck\" to see if a more recent version of this tool is available.\nPlease download the latest available version of the CPAT application.\n\nCloud Premigration Advisor Tool completed with overall result: Review Required\nCloud Premigration Advisor Tool generated report location: \/u00\/app\/oracle\/zdm\/zdm_ONPR_27\/out\/premigration_advisor_report.json\nCloud Premigration Advisor Tool generated report location: \/u00\/app\/oracle\/zdm\/zdm_ONPR_27\/out\/premigration_advisor_report.txt\n\n CPAT exit code: 2\n RESULT: Review Required\n\nSchemas Analyzed (2): USER3,USER2\nA total of 17 checks were performed\nThere were 0 checks with Failed results\nThere were 0 checks with Action Required results\nThere were 2 checks with Review Required results: has_noexport_object_grants (8 relevant objects), has_users_with_10g_password_version (1 relevant objects)\nThere were 0 checks with Review Suggested results has_noexport_object_grants\n         RESULT: Review Required\n         DESCRIPTION: Not all object grants are exported by Data Pump.\n         ACTION: Recreate any required grants on the target instance.  See Oracle Support Document ID 1911151.1 for more information. 
Note that any SELECT grants on system objects will need to be replaced with READ grants; SELECT is no longer allowed on system objects.\nhas_users_with_10g_password_version\n         RESULT: Review Required\n         DESCRIPTION: Case-sensitive passwords are required on ADB.\n         ACTION: To avoid Data Pump migration warnings change the passwords for the listed users before migration. Alternatively, modify these users passwords after migration to avoid login failures. See Oracle Support Document ID 2289453.1 for more information.\n\nzdmhost: 2024-02-06T15:05:07.894Z : Execution of phase ZDM_PRE_MIGRATION_ADVISOR completed\nzdmhost: 2024-02-06T15:05:07.948Z : Executing phase ZDM_VALIDATE_DATAPUMP_SETTINGS_SRC\nzdmhost: 2024-02-06T15:05:08.545Z : validating Oracle Data Pump dump directory \/mnt\/nfs_share\/ONPR\/ ...\nzdmhost: 2024-02-06T15:05:08.545Z : validating Data Pump dump directory path \/mnt\/nfs_share\/ONPR\/ on node vmonpr.domain.com ...\nzdmhost: 2024-02-06T15:05:08.975Z : validating if target database user can read files shared on medium NFS\nzdmhost: 2024-02-06T15:05:08.976Z : setting Data Pump dump file permission at source node...\nzdmhost: 2024-02-06T15:05:08.977Z : changing group of Data Pump dump files in directory path \/mnt\/nfs_share\/ONPR\/ on node vmonpr.domain.com ...\nzdmhost: 2024-02-06T15:05:09.958Z : Execution of phase ZDM_VALIDATE_DATAPUMP_SETTINGS_SRC completed\nzdmhost: 2024-02-06T15:05:10.005Z : Executing phase ZDM_VALIDATE_DATAPUMP_SETTINGS_TGT\nzdmhost: 2024-02-06T15:05:13.307Z : validating Oracle Data Pump dump directory \/mnt\/nfs_nfs_share\/ONPR ...\nzdmhost: 2024-02-06T15:05:13.308Z : listing directory path \/mnt\/nfs_nfs_share\/ONPR on node ExaCC-cl01n1.domain.com ...\nzdmhost: 2024-02-06T15:05:14.008Z : Execution of phase ZDM_VALIDATE_DATAPUMP_SETTINGS_TGT completed\nzdmhost: 2024-02-06T15:05:14.029Z : Executing phase ZDM_PREPARE_DATAPUMP_SRC\nzdmhost: 2024-02-06T15:05:14.033Z : Execution of phase ZDM_PREPARE_DATAPUMP_SRC 
completed\nzdmhost: 2024-02-06T15:05:14.058Z : Executing phase ZDM_DATAPUMP_ESTIMATE_SRC\nzdmhost: 2024-02-06T15:05:14.059Z : starting Data Pump Dump estimate for database \"ONPR.DOMAIN.COM\"\nzdmhost: 2024-02-06T15:05:14.060Z : running Oracle Data Pump job \"ZDM_27_DP_ESTIMATE_6279\" for database \"ONPR.DOMAIN.COM\"\nzdmhost: 2024-02-06T15:05:14.071Z : applying Data Pump dump compression ALL algorithm MEDIUM\nzdmhost: 2024-02-06T15:05:14.135Z : applying Data Pump dump encryption ALL algorithm AES128\nzdmhost: 2024-02-06T15:05:14.135Z : Oracle Data Pump Export parallelism set to 1 ...\nzdmhost: 2024-02-06T15:05:14.286Z : Oracle Data Pump errors to be ignored are ORA-31684,ORA-39111,ORA-39082...\nzdmhost: 2024-02-06T15:05:23.515Z : Oracle Data Pump log located at \/mnt\/nfs_share\/ONPR\/\/ZDM_27_DP_ESTIMATE_6279.log in the Database Server node\nzdmhost: 2024-02-06T15:05:53.643Z : Total estimation using BLOCKS method: 3.112 GB\nzdmhost: 2024-02-06T15:05:53.644Z : Execution of phase ZDM_DATAPUMP_ESTIMATE_SRC completed\nzdmhost: 2024-02-06T15:05:53.721Z : Executing phase ZDM_CLEANUP_SRC\nzdmhost: 2024-02-06T15:05:54.261Z : Cleaning up ZDM on the source node vmonpr ...\nzdmhost: 2024-02-06T15:05:55.506Z : Execution of phase ZDM_CLEANUP_SRC completed\n<\/pre>\n<\/br>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-migrate-source-database-to-exacc\">Migrate Source database to ExaCC<\/h2>\n\n\n\n<p>Once the evaluation is all good, we can move forward with running the migration. It is exactly the same zdmcli command without the option -eval.<\/p>\n\n\n\n<p>Let&#8217;s have a try and run it. 
We will have to provide both source and target system password:<\/p>\n\n\n\n<pre class=\"brush: sql; gutter: true; first-line: 1; highlight: [1]\">\n[zdmuser@zdmhost migration]$ \/u01\/app\/oracle\/product\/zdm\/bin\/zdmcli migrate database -sourcesid ONPR -rsp \/home\/zdmuser\/migration\/zdm_ONPR_logical_offline.rsp -sourcenode vmonpr -srcauth zdmauth -srcarg1 user:oracle -srcarg2 identity_file:\/home\/zdmuser\/.ssh\/id_rsa -srcarg3 sudo_location:\/usr\/bin\/sudo -targetnode ExaCC-cl01n1 -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:\/home\/zdmuser\/.ssh\/id_rsa -tgtarg3 sudo_location:\/usr\/bin\/sudo\nzdmhost.domain.com: Audit ID: 205\nEnter source database administrative user \"system\" password:\nEnter target database administrative user \"system\" password:\nOperation \"zdmcli migrate database\" scheduled with the job ID \"29\".\n<\/pre>\n<\/br>\n\n\n\n<p>We will query the job:<\/p>\n\n\n\n<pre class=\"brush: sql; gutter: true; first-line: 1; highlight: [1,25]\">\n[zdmuser@zdmhost migration]$ \/u01\/app\/oracle\/product\/zdm\/bin\/zdmcli query job -jobid 29\nzdmhost.domain.com: Audit ID: 211\nJob ID: 29\nUser: zdmuser\nClient: zdmhost\nJob Type: \"MIGRATE\"\nScheduled job command: \"zdmcli migrate database -sourcesid ONPR -rsp \/home\/zdmuser\/migration\/zdm_ONPR_logical_offline.rsp -sourcenode vmonpr -srcauth zdmauth -srcarg1 user:oracle -srcarg2 identity_file:\/home\/zdmuser\/.ssh\/id_rsa -srcarg3 sudo_location:\/usr\/bin\/sudo -targetnode ExaCC-cl01n1 -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:\/home\/zdmuser\/.ssh\/id_rsa -tgtarg3 sudo_location:\/usr\/bin\/sudo\"\nScheduled job execution start time: 2024-02-07T08:21:38+01. 
Equivalent local time: 2024-02-07 08:21:38\nCurrent status: FAILED\nResult file path: \"\/u01\/app\/oracle\/chkbase\/scheduled\/job-29-2024-02-07-08:22:03.log\"\nMetrics file path: \"\/u01\/app\/oracle\/chkbase\/scheduled\/job-29-2024-02-07-08:22:03.json\"\nExcluded objects file path: \"\/u01\/app\/oracle\/chkbase\/scheduled\/job-29-filtered-objects-2024-02-07T08:22:16.074.json\"\nJob execution start time: 2024-02-07 08:22:03\nJob execution end time: 2024-02-07 08:30:29\nJob execution elapsed time: 8 minutes 25 seconds\nZDM_VALIDATE_TGT ...................... COMPLETED\nZDM_VALIDATE_SRC ...................... COMPLETED\nZDM_SETUP_SRC ......................... COMPLETED\nZDM_PRE_MIGRATION_ADVISOR ............. COMPLETED\nZDM_VALIDATE_DATAPUMP_SETTINGS_SRC .... COMPLETED\nZDM_VALIDATE_DATAPUMP_SETTINGS_TGT .... COMPLETED\nZDM_PREPARE_DATAPUMP_SRC .............. COMPLETED\nZDM_DATAPUMP_ESTIMATE_SRC ............. COMPLETED\nZDM_PREPARE_DATAPUMP_TGT .............. COMPLETED\nZDM_PARALLEL_EXPORT_IMPORT ............ FAILED\nZDM_POST_DATAPUMP_SRC ................. PENDING\nZDM_POST_DATAPUMP_TGT ................. PENDING\nZDM_POST_ACTIONS ...................... PENDING\nZDM_CLEANUP_SRC ....................... 
PENDING\n<\/pre>\n<\/br>\n\n\n\n<p>As we can see, the job failed during the data import.<\/p>\n\n\n\n<p>Checking the ZDM log file, I could see the following errors:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>ORA-39384: Warning: User USER2 has been locked and the password expired.\nORA-39384: Warning: User USER1 has been locked and the password expired.<\/code><\/pre>\n\n\n\n<p>Checking the users on the source, I could see that USER1 and USER2 only have a password in the old 10G version, which will definitely cause problems:<\/p>\n\n\n\n<pre class=\"brush: sql; gutter: true; first-line: 1; highlight: [1]\">\nSQL&gt; select username, account_status, lock_date, password_versions from dba_users where ORACLE_MAINTAINED='N';\n\nUSERNAME                       ACCOUNT_STATUS                   LOCK_DATE            PASSWORD_VERSIONS\n------------------------------ -------------------------------- -------------------- -----------------\nUSER1                          OPEN                                                  10G\nUSER2                          OPEN                                                  10G\nUSER3                          OPEN                                                  10G 11G 12C\n\n3 rows selected.\n<\/pre>\n<\/br>\n\n\n\n<p>Checking the target PDB on the ExaCC, I could see that, because these 2 users only had a 10G password, ZDM locked them after importing the data:<\/p>\n\n\n\n<pre class=\"brush: sql; gutter: true; first-line: 1; highlight: [1]\">\nSQL&gt; select username, account_status, lock_date from dba_users where ORACLE_MAINTAINED='N';\n\nUSERNAME                       ACCOUNT_STATUS                   LOCK_DATE\n------------------------------ -------------------------------- --------------------\nUSER1                          EXPIRED &amp; LOCKED                 07-FEB-2024 08:26:10\nADMIN                          LOCKED                           06-FEB-2024 14:36:18\nUSER2                          EXPIRED &amp; LOCKED                 07-FEB-2024 08:26:10\nUSER3                          OPEN\n\n4 rows selected.\n<\/pre>\n<\/br>\n\n\n\n<p>On the ExaCC target PDB, I unlocked the users and changed their passwords:<\/p>\n\n\n\n<pre class=\"brush: sql; gutter: true; first-line: 1; highlight: [1,5,9,13,17]\">\nSQL&gt; alter user USER2 account unlock;\n\nUser altered.\n\nSQL&gt; alter user user1 account unlock;\n\nUser altered.\n\nSQL&gt; alter user USER2 identified by ************;\n\nUser altered.\n\nSQL&gt; alter user user1 identified by ************;\n\nUser altered.\n\nSQL&gt; select username, account_status, lock_date from dba_users where ORACLE_MAINTAINED='N';\n\nUSERNAME                       ACCOUNT_STATUS                   LOCK_DATE\n------------------------------ -------------------------------- --------------------\nUSER1                          OPEN\nADMIN                          LOCKED                           06-FEB-2024 14:36:18\nUSER2                          OPEN\nUSER3                          OPEN\n\n6 rows selected.\n<\/pre>\n<\/br>\n\n\n\n<p>I then resumed the zdmcli job so it would restart from the step where it failed:<\/p>\n\n\n\n<pre class=\"brush: sql; gutter: true; first-line: 1; highlight: [1]\">\n[zdmuser@zdmhost migration]$ \/u01\/app\/oracle\/product\/zdm\/bin\/zdmcli resume job -jobid 29\nzdmhost.domain.com: Audit ID: 213\n<\/pre>\n<\/br>\n\n\n\n<p>The job was still failing at the same step, and in the log file I could find several errors like:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>BATCH1 : Non-ignorable errors found in Oracle Data Pump job ZDM_29_DP_IMPORT_5005_BATCH1 log are\nORA-39151: Table \"USER3\".\"OPB_MAP_OPTIONS\" exists. All dependent metadata and data will be skipped due to table_exists_action of skip\nORA-39151: Table \"USER3\".\"OPB_USER_GROUPS\" exists.
All dependent metadata and data will be skipped due to table_exists_action of skip<\/code><\/pre>\n\n\n\n<p>In fact, as ZDM had previously failed on the import step, it tried to import the data again, but the tables were still there.<\/p>\n\n\n\n<p>So I had to clean up the target PDB on the ExaCC for USER3 and USER2. USER1 had no objects.<\/p>\n\n\n\n<p>As I did not want to change the user passwords on the on-premises source database, I extracted the DDL of the users on the ExaCC, then dropped and recreated them before resuming the job.<\/p>\n\n\n\n<pre class=\"brush: sql; gutter: true; first-line: 1; highlight: [1,2,11,20,24,28,34]\">\nSQL&gt; set long 99999999\nSQL&gt; select dbms_metadata.get_ddl('USER','USER2') from dual;\n\nDBMS_METADATA.GET_DDL('USER','USER2')\n--------------------------------------------------------------------------------\n\n   CREATE USER \"USER2\" IDENTIFIED BY VALUES 'S:C5EF**********3F79'\n      DEFAULT TABLESPACE \"TSP******\"\n      TEMPORARY TABLESPACE \"TEMP\"\n\nSQL&gt; select dbms_metadata.get_ddl('USER','USER3') from dual;\n\nDBMS_METADATA.GET_DDL('USER','USER3')\n--------------------------------------------------------------------------------\n\n   CREATE USER \"USER3\" IDENTIFIED BY VALUES 'S:EDD8**********FD44'\n      DEFAULT TABLESPACE \"TSP******\"\n      TEMPORARY TABLESPACE \"TEMP\"\n\nSQL&gt; drop user USER2 cascade;\n\nUser dropped.\n\nSQL&gt; drop user USER3 cascade;\n\nUser dropped.\n\nSQL&gt; CREATE USER \"USER3\" IDENTIFIED BY VALUES 'S:EDD86**********8FD44'\n  2  DEFAULT TABLESPACE \"TSP******\"\n  3  TEMPORARY TABLESPACE \"TEMP\";\n\nUser created.\n\nSQL&gt; CREATE USER \"USER2\" IDENTIFIED BY VALUES 'S:C5EF**********3F79'\n  2  DEFAULT TABLESPACE \"TSP******\"\n  3  TEMPORARY TABLESPACE \"TEMP\";\n\nUser created.\n<\/pre>\n<\/br>\n\n\n\n<p>And I resumed the job once again:<\/p>\n\n\n\n<pre class=\"brush: sql; gutter: true; first-line: 1; highlight: [1]\">\n[zdmuser@zdmhost migration]$
\/u01\/app\/oracle\/product\/zdm\/bin\/zdmcli resume job -jobid 29\nzdmhost.domain.com: Audit ID: 219\n<\/pre>\n<\/br>\n\n\n\n<p>And now the migration has been completed successfully. The job type is MIGRATE now and Current Status is SUCCEEDED:<\/p>\n\n\n\n<pre class=\"brush: sql; gutter: true; first-line: 1; highlight: [1,6,9]\">\n[zdmuser@zdmhost migration]$ \/u01\/app\/oracle\/product\/zdm\/bin\/zdmcli query job -jobid 29\nzdmhost.domain.com: Audit ID: 223\nJob ID: 29\nUser: zdmuser\nClient: zdmhost\nJob Type: \"MIGRATE\"\nScheduled job command: \"zdmcli migrate database -sourcesid ONPR -rsp \/home\/zdmuser\/migration\/zdm_ONPR_logical_offline.rsp -sourcenode vmonpr -srcauth zdmauth -srcarg1 user:oracle -srcarg2 identity_file:\/home\/zdmuser\/.ssh\/id_rsa -srcarg3 sudo_location:\/usr\/bin\/sudo -targetnode ExaCC-cl01n1 -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:\/home\/zdmuser\/.ssh\/id_rsa -tgtarg3 sudo_location:\/usr\/bin\/sudo\"\nScheduled job execution start time: 2024-02-07T08:21:38+01. Equivalent local time: 2024-02-07 08:21:38\nCurrent status: SUCCEEDED\nResult file path: \"\/u01\/app\/oracle\/chkbase\/scheduled\/job-29-2024-02-07-08:22:03.log\"\nMetrics file path: \"\/u01\/app\/oracle\/chkbase\/scheduled\/job-29-2024-02-07-08:22:03.json\"\nExcluded objects file path: \"\/u01\/app\/oracle\/chkbase\/scheduled\/job-29-filtered-objects-2024-02-07T08:22:16.074.json\"\nJob execution start time: 2024-02-07 08:22:03\nJob execution end time: 2024-02-07 09:01:21\nJob execution elapsed time: 14 minutes 43 seconds\nZDM_VALIDATE_TGT ...................... COMPLETED\nZDM_VALIDATE_SRC ...................... COMPLETED\nZDM_SETUP_SRC ......................... COMPLETED\nZDM_PRE_MIGRATION_ADVISOR ............. COMPLETED\nZDM_VALIDATE_DATAPUMP_SETTINGS_SRC .... COMPLETED\nZDM_VALIDATE_DATAPUMP_SETTINGS_TGT .... COMPLETED\nZDM_PREPARE_DATAPUMP_SRC .............. COMPLETED\nZDM_DATAPUMP_ESTIMATE_SRC ............. 
COMPLETED\nZDM_PREPARE_DATAPUMP_TGT .............. COMPLETED\nZDM_PARALLEL_EXPORT_IMPORT ............ COMPLETED\nZDM_POST_DATAPUMP_SRC ................. COMPLETED\nZDM_POST_DATAPUMP_TGT ................. COMPLETED\nZDM_POST_ACTIONS ...................... COMPLETED\nZDM_CLEANUP_SRC ....................... COMPLETED\n<\/pre>\n<\/br>\n\n\n\n<p>ZDM log file:<\/p>\n\n\n\n<pre class=\"brush: sql; gutter: true; first-line: 1; highlight: [1]\">\n[zdmuser@zdmhost ~]$ tail -37 \/u01\/app\/oracle\/chkbase\/scheduled\/job-29-2024-02-07-08:22:03.log\n####################################################################\nzdmhost: 2024-02-07T07:56:33.580Z : Resuming zero downtime migrate operation ...\nzdmhost: 2024-02-07T07:56:33.587Z : Starting zero downtime migrate operation ...\nzdmhost: 2024-02-07T07:56:37.205Z : Fetching details of on premises Exadata Database \"ONPRZ_APP_001T_PRI.domain.com\"\nzdmhost: 2024-02-07T07:56:37.205Z : Type of database : \"Exadata at Customer\"\nzdmhost: 2024-02-07T07:56:37.283Z : Skipping phase ZDM_VALIDATE_SRC on resume\nzdmhost: 2024-02-07T07:56:37.365Z : Skipping phase ZDM_SETUP_SRC on resume\nzdmhost: 2024-02-07T07:56:37.377Z : Skipping phase ZDM_PRE_MIGRATION_ADVISOR on resume\nzdmhost: 2024-02-07T07:56:37.391Z : Skipping phase ZDM_VALIDATE_DATAPUMP_SETTINGS_SRC on resume\nzdmhost: 2024-02-07T07:56:37.406Z : Skipping phase ZDM_VALIDATE_DATAPUMP_SETTINGS_TGT on resume\nzdmhost: 2024-02-07T07:56:37.422Z : Skipping phase ZDM_PREPARE_DATAPUMP_SRC on resume\nzdmhost: 2024-02-07T07:56:37.437Z : Skipping phase ZDM_DATAPUMP_ESTIMATE_SRC on resume\nzdmhost: 2024-02-07T07:56:37.455Z : Skipping phase ZDM_PREPARE_DATAPUMP_TGT on resume\nzdmhost: 2024-02-07T07:56:37.471Z : Executing phase ZDM_PARALLEL_EXPORT_IMPORT\nzdmhost: 2024-02-07T07:56:37.482Z : Skipping phase ZDM_DATAPUMP_EXPORT_SRC_BATCH1 on resume\nzdmhost: 2024-02-07T07:56:37.485Z : Skipping phase ZDM_TRANSFER_DUMPS_SRC_BATCH1 on resume\nzdmhost: 2024-02-07T07:56:37.487Z : Executing phase 
ZDM_DATAPUMP_IMPORT_TGT_BATCH1\nzdmhost: 2024-02-07T07:56:38.368Z : listing directory path \/mnt\/nfs_nfs_share\/ONPR on node ExaCC-cl01n1.domain.com ...\nzdmhost: 2024-02-07T07:56:39.474Z : Oracle Data Pump Import parallelism set to 1 ...\nzdmhost: 2024-02-07T07:56:39.481Z : Oracle Data Pump errors to be ignored are ORA-31684,ORA-39111,ORA-39082...\nzdmhost: 2024-02-07T07:56:39.481Z : starting Data Pump Import for database \"ONPRZ_APP_001T.DOMAIN.COM\"\nzdmhost: 2024-02-07T07:56:39.482Z : running Oracle Data Pump job \"ZDM_29_DP_IMPORT_5005_BATCH1\" for database \"ONPRZ_APP_001T.DOMAIN.COM\"\nzdmhost: 2024-02-07T08:00:46.569Z : Oracle Data Pump job \"ZDM_29_DP_IMPORT_5005_BATCH1\" for database \"ONPRZ_APP_001T.DOMAIN.COM\" completed.\nzdmhost: 2024-02-07T08:00:46.569Z : Oracle Data Pump log located at \/mnt\/nfs_nfs_share\/ONPR\/ZDM_29_DP_IMPORT_5005_BATCH1.log in the Database Server node\nzdmhost: 2024-02-07T08:01:17.239Z : Execution of phase ZDM_DATAPUMP_IMPORT_TGT_BATCH1 completed\nzdmhost: 2024-02-07T08:01:17.248Z : Execution of phase ZDM_PARALLEL_EXPORT_IMPORT completed\nzdmhost: 2024-02-07T08:01:17.268Z : Executing phase ZDM_POST_DATAPUMP_SRC\nzdmhost: 2024-02-07T08:01:17.272Z : listing directory path \/mnt\/nfs_share\/ONPR\/ on node vmonpr.domain.com ...\nzdmhost: 2024-02-07T08:01:17.811Z : deleting Data Pump dump in directory path \/mnt\/nfs_share\/ONPR\/ on node vmonpr.domain.com ...\nzdmhost: 2024-02-07T08:01:19.052Z : Execution of phase ZDM_POST_DATAPUMP_SRC completed\nzdmhost: 2024-02-07T08:01:19.070Z : Executing phase ZDM_POST_DATAPUMP_TGT\nzdmhost: 2024-02-07T08:01:19.665Z : Execution of phase ZDM_POST_DATAPUMP_TGT completed\nzdmhost: 2024-02-07T08:01:19.689Z : Executing phase ZDM_POST_ACTIONS\nzdmhost: 2024-02-07T08:01:19.693Z : Execution of phase ZDM_POST_ACTIONS completed\nzdmhost: 2024-02-07T08:01:19.716Z : Executing phase ZDM_CLEANUP_SRC\nzdmhost: 2024-02-07T08:01:20.213Z : Cleaning up ZDM on the source node vmonpr ...\nzdmhost: 
2024-02-07T08:01:21.458Z : Execution of phase ZDM_CLEANUP_SRC completed\n[zdmuser@zdmhost ~]$\n<\/pre>\n<\/br>\n\n\n\n<p>If we check the ZDM import log saved on the NFS share, here named ZDM_32_DP_IMPORT_1847_BATCH1.log, we can see that the import completed successfully with 3 errors, all displayed in the same log file:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>09-FEB-24 10:00:22.534: W-1 Processing object type SCHEMA_EXPORT\/USER\n09-FEB-24 10:00:22.943: ORA-31684: Object type USER:\"USER1\" already exists\n09-FEB-24 10:00:22.943: ORA-31684: Object type USER:\"USER2\" already exists\n09-FEB-24 10:00:22.943: ORA-31684: Object type USER:\"USER3\" already exists<\/code><\/pre>\n\n\n\n<p>These errors appear because we created the users on the ExaCC target database before resuming the zdmcli job, thus before the import ran again. Fortunately, they are part of the list of errors that ZDM ignores, which makes sense.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-checks\">Checks<\/h2>\n\n\n\n<p>We can then of course run some checks, such as comparing the number of objects for the migrated users on the source and the target, checking PDB violations, checking invalid objects, ensuring that the tablespaces are encrypted on the ExaCC target DB, and so on.<\/p>\n\n\n\n<p>To compare the number of objects:<\/p>\n\n\n\n<pre class=\"brush: sql; gutter: true; first-line: 1; highlight: [1]\">\nSQL&gt; select owner, count(*) from dba_objects where owner in ('USER1','USER2','USER3') group by owner order by 1;\n\nOWNER             COUNT(*)\n--------------- ----------\nUSER3                758\nUSER2                760\n<\/pre>\n<\/br>\n\n\n\n<p>To check that the tablespaces are encrypted:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>SQL&gt; select a.con_id, a.tablespace_name, nvl(b.ENCRYPTIONALG,'NOT ENCRYPTED') from  cdb_tablespaces a, (select x.con_id, y.ENCRYPTIONALG, x.name from V$TABLESPACE x,  V$ENCRYPTED_TABLESPACES y where x.ts#=y.ts# and
x.con_id=y.con_id) b where a.con_id=b.con_id(+) and a.tablespace_name=b.name(+) order by 1,2;<\/code><\/pre>\n\n\n\n<p>To check PDB violations:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>SQL&gt; select status, message from pdb_plug_in_violations;<\/code><\/pre>\n\n\n\n<p>To check invalid objects:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>SQL&gt; select count(*) from dba_invalid_objects;<\/code><\/pre>\n\n\n\n<p>And we could, of course, if needed, relocate the PDB to another ExaCC CDB.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-conclusion\">Conclusion<\/h2>\n\n\n\n<p>That&#8217;s it. We could easily migrate a single-tenant on-premises database to an ExaCC PDB using ZDM Logical Offline migration. The tool has real advantages: we do not need to deal with any Oracle commands ourselves, such as running Data Pump manually.<\/p>\n\n\n\n<p>In the next blog post, I will show you how we migrated an on-premises database to ExaCC on our customer&#8217;s system using ZDM Physical Online migration.<\/p>\n","protected":false,"excerpt":{"rendered":"<p>A while ago I had been testing and blogging about ZDM, see my previous articles. And I finally had the chance to implement it at one of our customer to migrate on-premises database to Exadata Cloud @Customer.
In this article I would like to share with you my experience migrating an on-premises database to ExaCC [&hellip;]<\/p>\n","protected":false},"author":48,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[59],"tags":[],"type_dbi":[],"class_list":["post-31108","post","type-post","status-publish","format-standard","hentry","category-oracle"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.2) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Logical Offline Migration to ExaCC with Oracle Zero Downtime Migration (ZDM) - dbi Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.dbi-services.com\/blog\/logical-offline-migration-to-exacc-with-oracle-zero-downtime-migration-zdm\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Logical Offline Migration to ExaCC with Oracle Zero Downtime Migration (ZDM)\" \/>\n<meta property=\"og:description\" content=\"A while ago I had been testing and blogging about ZDM, see my previous articles. And I finally had the chance to implement it at one of our customer to migrate on-premises database to Exadata Cloud @Customer. 
In this article I would like to share with you my experience migrating an on-premises database to ExaCC [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.dbi-services.com\/blog\/logical-offline-migration-to-exacc-with-oracle-zero-downtime-migration-zdm\/\" \/>\n<meta property=\"og:site_name\" content=\"dbi Blog\" \/>\n<meta property=\"article:published_time\" content=\"2024-02-20T23:47:27+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-04-12T13:55:56+00:00\" \/>\n<meta name=\"author\" content=\"Marc Wagner\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Marc Wagner\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"8 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/logical-offline-migration-to-exacc-with-oracle-zero-downtime-migration-zdm\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/logical-offline-migration-to-exacc-with-oracle-zero-downtime-migration-zdm\/\"},\"author\":{\"name\":\"Marc Wagner\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/225d9884b8467ead9a872823acb14628\"},\"headline\":\"Logical Offline Migration to ExaCC with Oracle Zero Downtime Migration 
(ZDM)\",\"datePublished\":\"2024-02-20T23:47:27+00:00\",\"dateModified\":\"2024-04-12T13:55:56+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/logical-offline-migration-to-exacc-with-oracle-zero-downtime-migration-zdm\/\"},\"wordCount\":1821,\"commentCount\":0,\"articleSection\":[\"Oracle\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.dbi-services.com\/blog\/logical-offline-migration-to-exacc-with-oracle-zero-downtime-migration-zdm\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/logical-offline-migration-to-exacc-with-oracle-zero-downtime-migration-zdm\/\",\"url\":\"https:\/\/www.dbi-services.com\/blog\/logical-offline-migration-to-exacc-with-oracle-zero-downtime-migration-zdm\/\",\"name\":\"Logical Offline Migration to ExaCC with Oracle Zero Downtime Migration (ZDM) - dbi Blog\",\"isPartOf\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#website\"},\"datePublished\":\"2024-02-20T23:47:27+00:00\",\"dateModified\":\"2024-04-12T13:55:56+00:00\",\"author\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/225d9884b8467ead9a872823acb14628\"},\"breadcrumb\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/logical-offline-migration-to-exacc-with-oracle-zero-downtime-migration-zdm\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.dbi-services.com\/blog\/logical-offline-migration-to-exacc-with-oracle-zero-downtime-migration-zdm\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/logical-offline-migration-to-exacc-with-oracle-zero-downtime-migration-zdm\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Accueil\",\"item\":\"https:\/\/www.dbi-services.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Logical Offline Migration to ExaCC with Oracle Zero Downtime 
Migration (ZDM)\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#website\",\"url\":\"https:\/\/www.dbi-services.com\/blog\/\",\"name\":\"dbi Blog\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.dbi-services.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/225d9884b8467ead9a872823acb14628\",\"name\":\"Marc Wagner\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/secure.gravatar.com\/avatar\/a873cc6e7fbdbbcbdbcaf5dbded14ad9a77b2ec2c3e03b4d724ed33d35d5f328?s=96&d=mm&r=g\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/a873cc6e7fbdbbcbdbcaf5dbded14ad9a77b2ec2c3e03b4d724ed33d35d5f328?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/a873cc6e7fbdbbcbdbcaf5dbded14ad9a77b2ec2c3e03b4d724ed33d35d5f328?s=96&d=mm&r=g\",\"caption\":\"Marc Wagner\"},\"url\":\"https:\/\/www.dbi-services.com\/blog\/author\/marc-wagner\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Logical Offline Migration to ExaCC with Oracle Zero Downtime Migration (ZDM) - dbi Blog","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.dbi-services.com\/blog\/logical-offline-migration-to-exacc-with-oracle-zero-downtime-migration-zdm\/","og_locale":"en_US","og_type":"article","og_title":"Logical Offline Migration to ExaCC with Oracle Zero Downtime Migration (ZDM)","og_description":"A while ago I had been testing and blogging about ZDM, see my previous articles. 
And I finally had the chance to implement it at one of our customer to migrate on-premises database to Exadata Cloud @Customer. In this article I would like to share with you my experience migrating an on-premises database to ExaCC [&hellip;]","og_url":"https:\/\/www.dbi-services.com\/blog\/logical-offline-migration-to-exacc-with-oracle-zero-downtime-migration-zdm\/","og_site_name":"dbi Blog","article_published_time":"2024-02-20T23:47:27+00:00","article_modified_time":"2024-04-12T13:55:56+00:00","author":"Marc Wagner","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Marc Wagner","Est. reading time":"8 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.dbi-services.com\/blog\/logical-offline-migration-to-exacc-with-oracle-zero-downtime-migration-zdm\/#article","isPartOf":{"@id":"https:\/\/www.dbi-services.com\/blog\/logical-offline-migration-to-exacc-with-oracle-zero-downtime-migration-zdm\/"},"author":{"name":"Marc Wagner","@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/225d9884b8467ead9a872823acb14628"},"headline":"Logical Offline Migration to ExaCC with Oracle Zero Downtime Migration (ZDM)","datePublished":"2024-02-20T23:47:27+00:00","dateModified":"2024-04-12T13:55:56+00:00","mainEntityOfPage":{"@id":"https:\/\/www.dbi-services.com\/blog\/logical-offline-migration-to-exacc-with-oracle-zero-downtime-migration-zdm\/"},"wordCount":1821,"commentCount":0,"articleSection":["Oracle"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.dbi-services.com\/blog\/logical-offline-migration-to-exacc-with-oracle-zero-downtime-migration-zdm\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.dbi-services.com\/blog\/logical-offline-migration-to-exacc-with-oracle-zero-downtime-migration-zdm\/","url":"https:\/\/www.dbi-services.com\/blog\/logical-offline-migration-to-exacc-with-oracle-zero-downtime-migration-zdm\/","name":"Logical Offline 
Migration to ExaCC with Oracle Zero Downtime Migration (ZDM) - dbi Blog","isPartOf":{"@id":"https:\/\/www.dbi-services.com\/blog\/#website"},"datePublished":"2024-02-20T23:47:27+00:00","dateModified":"2024-04-12T13:55:56+00:00","author":{"@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/225d9884b8467ead9a872823acb14628"},"breadcrumb":{"@id":"https:\/\/www.dbi-services.com\/blog\/logical-offline-migration-to-exacc-with-oracle-zero-downtime-migration-zdm\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.dbi-services.com\/blog\/logical-offline-migration-to-exacc-with-oracle-zero-downtime-migration-zdm\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/www.dbi-services.com\/blog\/logical-offline-migration-to-exacc-with-oracle-zero-downtime-migration-zdm\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Accueil","item":"https:\/\/www.dbi-services.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Logical Offline Migration to ExaCC with Oracle Zero Downtime Migration (ZDM)"}]},{"@type":"WebSite","@id":"https:\/\/www.dbi-services.com\/blog\/#website","url":"https:\/\/www.dbi-services.com\/blog\/","name":"dbi Blog","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.dbi-services.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/225d9884b8467ead9a872823acb14628","name":"Marc 
Wagner","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/a873cc6e7fbdbbcbdbcaf5dbded14ad9a77b2ec2c3e03b4d724ed33d35d5f328?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/a873cc6e7fbdbbcbdbcaf5dbded14ad9a77b2ec2c3e03b4d724ed33d35d5f328?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/a873cc6e7fbdbbcbdbcaf5dbded14ad9a77b2ec2c3e03b4d724ed33d35d5f328?s=96&d=mm&r=g","caption":"Marc Wagner"},"url":"https:\/\/www.dbi-services.com\/blog\/author\/marc-wagner\/"}]}},"_links":{"self":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/31108","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/users\/48"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/comments?post=31108"}],"version-history":[{"count":50,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/31108\/revisions"}],"predecessor-version":[{"id":32522,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/31108\/revisions\/32522"}],"wp:attachment":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/media?parent=31108"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/categories?post=31108"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/tags?post=31108"},{"taxonomy":"type","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/type_dbi?post=31108"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}