A while ago I tested and blogged about ZDM (see my previous articles), and I finally had the chance to implement it at one of our customers to migrate on-premises databases to Exadata Cloud@Customer. In this article I would like to share with you my experience migrating an on-premises database to ExaCC using ZDM Logical Offline Migration with a backup location. We chose this method as it is the mandatory one for small Oracle SE2 databases and our preferred one for huge Oracle SE2 databases.


Naming convention

Of course, I have anonymised all outputs to remove customer infrastructure names. Let's use the following naming convention.

ExaCC Cluster 01 node 01 : ExaCC-cl01n1
ExaCC Cluster 01 node 02 : ExaCC-cl01n2
On premises Source Host : vmonpr
Target db_unique_name on the ExaCC : ONPR_RZ2
Database Name to migrate : ONPR
ZDM Host : zdmhost
ZDM user : zdmuser
Domain : domain.com
ExaCC PDB to migrate to : ONPRZ_APP_001T

We will then migrate the on-premises single-tenant database, named ONPR, to a PDB on the ExaCC. The PDB will be named ONPRZ_APP_001T.

We will migrate 3 schemas: USER1, USER2 and USER3.

Ports

It is important to mention that the following ports need to be open:

Source              Destination              Port
------------------  -----------------------  ----------
ZDM Host            On-premises Host         22
ZDM Host            On-premises Host         Oracle Net
ZDM Host            ExaCC VM (both nodes)    22
ZDM Host            ExaCC (SCAN + VIP)       Oracle Net
On-premises Host    NFS Server               111, 2049
ExaCC               NFS Server               111, 2049
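These prerequisites can be quickly verified from the ZDM host before starting, for example with a small bash helper relying on the /dev/tcp pseudo-device. This is only a sketch: the hosts and ports below come from our naming convention and must be adapted to your environment.

```shell
#!/bin/bash
# Quick TCP reachability check (sketch). Prints "open" or "closed"
# for each host:port pair; unreachable/unresolvable hosts show as closed.
check_port() {
  local host=$1 port=$2
  if timeout 3 bash -c "</dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} open"
  else
    echo "${host}:${port} closed"
  fi
}

check_port vmonpr 22                          # ZDM host -> on-premises host, SSH
check_port vmonpr 13000                       # ZDM host -> on-premises host, Oracle Net
check_port ExaCC-cl01n1 22                    # ZDM host -> ExaCC node 1, SSH
check_port ExaCC-cl01-scan.domain.com 1521    # ZDM host -> ExaCC SCAN, Oracle Net
```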

If, for example, the Oracle Net ports are not opened between the ZDM host and the ExaCC, the migration evaluation will immediately stop at the first step, named ZDM_VALIDATE_TGT, and the following errors will be found in the log file:

PRGZ-3181 : Internal error: ValidateTargetDbLogicalZdm-5-PRGD-1059 : query to retrieve NLS database parameters failed
PRGD-1002 : SELECT statement "SELECT * FROM GLOBAL_NAME" execution as user "system" failed for database with Java Database Connectivity (JDBC) URL "jdbc:oracle:thin:@(description=(address=(protocol=tcp)(port=1521)(host=ExaCC-cl01-scan.domain.com))(connect_data=(service_name=ONPRZ_APP_001T_PRI.domain.com)))"
IO Error: The Network Adapter could not establish the connection (CONNECTION_ID=9/tZ9Bt5Q5q5VfqU7JC/xA==)

Requirements

There are a few requirements that need to be met.

streams_pool_size instance parameter on the source database

To have an initial pool allocated and get optimal Data Pump performance, the streams_pool_size instance parameter on the source database needs to be set to at least 256-300 MB for a Logical Offline Migration.
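On the source instance this can be checked and, if needed, adjusted as follows (a sketch; scope=both assumes the instance uses an SPFILE):

```sql
-- Check the current value
show parameter streams_pool_size

-- Set a minimum of 256 MB for the migration (adjust between 256 and 300 MB)
alter system set streams_pool_size=256M scope=both;
```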


Passwordless Login

Passwordless Login needs to be configured between ZDM Host, the Source Host and Target Host. See my previous blog : https://www.dbi-services.com/blog/oracle-zdm-migration-java-security-invalidkeyexception-invalid-key-format/

If passwordless login is not configured for one of the nodes, you will see errors like the following in the log file during the migration evaluation:

PRCZ-2006 : Unable to establish SSH connection to node "ExaCC-cl01n2" to execute command "<command_to_be_executed>"
No more authentication methods available.

Database Character Set

The ExaCC target CDB should use the same character set as the on-premises source database. If, for example, the final CDB that will host your new PDB uses AL32UTF8 (so that it can host PDBs with various character sets) and your source database is not in AL32UTF8, you will first need to go through a temporary CDB on the ExaCC before relocating the PDB to the final one.

To check the character set, run the following statement on the on-premises source DB:

SQL> select parameter, value from v$nls_parameters where parameter='NLS_CHARACTERSET';

If your ExaCC target CDB character set (here, as an example, AL32UTF8) does not match the on-premises source DB character set (here, as an example, WE8ISO8859P1), you will get the following ZDM error during the evaluation of the migration:

PRGZ-3549 : Source NLS character set WE8ISO8859P1 is different from target NLS character set AL32UTF8.
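The matching check on the ExaCC target CDB can be done against NLS_DATABASE_PARAMETERS (a sketch, run as a privileged user in the target CDB):

```sql
select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
```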

Create PDB on the ExaCC

The final PDB will have to be created in one of the ExaCC container databases, chosen according to the character set of the source database.
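For illustration, creating the PDB with plain SQL could look like the following. This is only a sketch: on ExaCC you would typically create the PDB through the OCI console or the cloud tooling, and the admin user name and password here are placeholders.

```sql
-- Run in the chosen target CDB (character set matching the source)
create pluggable database ONPRZ_APP_001T
  admin user pdbadmin identified by "**********";

alter pluggable database ONPRZ_APP_001T open;
alter pluggable database ONPRZ_APP_001T save state;
```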


Create NFS directory

An NFS directory and Oracle directories need to be set up to store the Oracle dump files created automatically by ZDM. We will create a file system directory on the NFS mount point and a new Oracle directory named MIG_SOURCE_DEST in both databases (source and target). The NFS directory must be accessible from and shared between both environments.

If you do not have any shared NFS between source and target, you will get the following kind of errors when evaluating the migration:

zdmhost: 2024-02-06T14:14:17.001Z : Executing phase ZDM_VALIDATE_DATAPUMP_SETTINGS_TGT
zdmhost: 2024-02-06T14:14:19.583Z : validating Oracle Data Pump dump directory /u02/app/oracle/product/19.0.0.0/dbhome_2/rdbms/log/10B7A59DF2E82A9AE063021FA10ABD38 ...
zdmhost: 2024-02-06T14:14:19.587Z : listing directory path /u02/app/oracle/product/19.0.0.0/dbhome_2/rdbms/log/10B7A59DF2E82A9AE063021FA10ABD38 on node ExaCC-cl01n1.domain.com ...
PRGZ-1211 : failed to validate specified database directory object path "/u02/app/oracle/product/19.0.0.0/dbhome_2/rdbms/log/10B7A59DF2E82A9AE063021FA10ABD38"
PRGZ-1420 : specified database import directory object path "/u02/app/oracle/product/19.0.0.0/dbhome_2/rdbms/log/10B7A59DF2E82A9AE063021FA10ABD38" is not shared between source and target database server

After having created the directory on the shared NFS, which will be visible from both the source and the target, you will need to create an Oracle directory (or use an existing one). I decided to create a new one, named MIG_SOURCE_DEST. The following will have to be run on both the source and the target databases:

SQL> create directory MIG_SOURCE_DEST as '/mnt/nfs_share/ONPR/';

Directory created.

SQL> select directory_name, directory_path from dba_directories where upper(directory_name) like '%MIG%';

DIRECTORY_NAME                 DIRECTORY_PATH
------------------------------ ------------------------------------------------------------
MIG_SOURCE_DEST                /mnt/nfs_share/ONPR/

You will also need to set the correct permissions on the folder, knowing that the ExaCC OS user might not have the same UID as the source host OS user.
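Since the oracle UID may differ between the source host and the ExaCC nodes, one simple approach is to open up the dump directory entirely. This is a sketch; a more restrictive group-based setup is preferable when both sides can share a common group. On the real systems the path would be /mnt/nfs_share/ONPR; the example below demonstrates on a temporary location.

```shell
#!/bin/bash
# Create the dump directory and make it writable by both sides (sketch).
prepare_dump_dir() {
  local dir=$1
  mkdir -p "$dir"
  chmod 777 "$dir"   # crude but effective when UIDs differ; restrict if possible
}

# Real target would be /mnt/nfs_share/ONPR; demonstrated here on a temp path.
prepare_dump_dir "${TMPDIR:-/tmp}/ONPR"
```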


Source user password version

It is mandatory that the password of every schema being migrated exists in at least the 12C version. For old password versions like 10G or 11G, the user's password needs to be changed to avoid additional troubleshooting and actions during the ZDM migration.

To check the user password versions on the source, run the following SQL statement:

SQL> select username, account_status, lock_date, password_versions from dba_users where ORACLE_MAINTAINED='N';
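If a schema only shows the 10G (or 10G 11G) password versions, resetting its password on the source regenerates the newer verifiers, provided sqlnet.ora does not force an old SQLNET.ALLOWED_LOGON_VERSION_SERVER. A sketch, with a placeholder password:

```sql
alter user USER1 identified by "********";

-- The 12C verifier should now be listed
select username, password_versions from dba_users where username = 'USER1';
```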

Prepare ZDM response file

We will use the ZDM response file template named zdm_logical_template.rsp and adapt it.

[zdmuser@zdmhost migration]$ cp /u01/app/oracle/product/zdm/rhp/zdm/template/zdm_logical_template.rsp ./zdm_ONPR_logical_offline.rsp

The main parameters to take care of are:

DATA_TRANSFER_MEDIUM
    Specifies how data will be transferred from the source database system to the target database system. To be NFS.

TARGETDATABASE_ADMINUSERNAME
    User to be used on the target for the migration. To be SYSTEM.

SOURCEDATABASE_ADMINUSERNAME
    User to be used on the source for the migration. To be SYSTEM.

SOURCEDATABASE_CONNECTIONDETAILS_HOST
    Source listener host.

SOURCEDATABASE_CONNECTIONDETAILS_PORT
    Source listener port.

SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME
    Source database service name.

TARGETDATABASE_CONNECTIONDETAILS_HOST
    Target listener host (on the ExaCC, the SCAN listener).

TARGETDATABASE_CONNECTIONDETAILS_PORT
    Target listener port. To be 1521.

TARGETDATABASE_CONNECTIONDETAILS_SERVICENAME
    Target database service name.

TARGETDATABASE_DBTYPE
    Target environment. To be EXADATA.

DATAPUMPSETTINGS_SCHEMABATCH-1
    Comma-separated list of database schemas to be migrated.

DATAPUMPSETTINGS_SCHEMABATCHCOUNT
    Mutually exclusive with the schemaBatch option. If specified, user schemas are identified automatically.

DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE
    Maximum number of worker processes that can be used for a Data Pump import job. For SE2, the value should not be set higher than 1.

DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXPORTPARALLELISMDEGREE
    Maximum number of worker processes that can be used for a Data Pump export job. For SE2, the value should not be set higher than 1.

DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXCLUDETYPELIST
    Comma-separated list of object types to exclude.

DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_NAME
    Oracle database directory that was created on the source for the export.

DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_PATH
    NFS directory used for the export dump files.

DATAPUMPSETTINGS_IMPORTDIRECTORYOBJECT_NAME
    Oracle database directory that was created on the target for the import.

DATAPUMPSETTINGS_IMPORTDIRECTORYOBJECT_PATH
    NFS directory used for the import dump files.

TABLESPACEDETAILS_AUTOCREATE
    If set to TRUE, ZDM will automatically create the tablespaces. To be TRUE.

Here is the updated ZDM response file compared to the ZDM template, for the migration we are going to run:

[zdmuser@zdmhost migration]$ diff zdm_ONPR_logical_offline.rsp /u01/app/oracle/product/zdm/rhp/zdm/template/zdm_logical_template.rsp
30c30
< DATA_TRANSFER_MEDIUM=NFS
---
> DATA_TRANSFER_MEDIUM=OSS
47c47
< TARGETDATABASE_ADMINUSERNAME=system
---
> TARGETDATABASE_ADMINUSERNAME=
63c63
< SOURCEDATABASE_ADMINUSERNAME=system
---
> SOURCEDATABASE_ADMINUSERNAME=
80c80
< SOURCEDATABASE_CONNECTIONDETAILS_HOST=vmonpr
---
> SOURCEDATABASE_CONNECTIONDETAILS_HOST=
90c90
< SOURCEDATABASE_CONNECTIONDETAILS_PORT=13000
---
> SOURCEDATABASE_CONNECTIONDETAILS_PORT=
102c102
< SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME=ONPR.domain.com
---
> SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME=
153c153
< TARGETDATABASE_CONNECTIONDETAILS_HOST=ExaCC-cl01-scan.domain.com
---
> TARGETDATABASE_CONNECTIONDETAILS_HOST=
163c163
< TARGETDATABASE_CONNECTIONDETAILS_PORT=1521
---
> TARGETDATABASE_CONNECTIONDETAILS_PORT=
175c175
< TARGETDATABASE_CONNECTIONDETAILS_SERVICENAME=ONPRZ_APP_001T_PRI.domain.com
---
> TARGETDATABASE_CONNECTIONDETAILS_SERVICENAME=
307c307
< TARGETDATABASE_DBTYPE=EXADATA
---
> TARGETDATABASE_DBTYPE=
726c726
< DATAPUMPSETTINGS_SCHEMABATCH-1=USER1,USER2,USER3
---
> DATAPUMPSETTINGS_SCHEMABATCH-1=
947c947
< DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE=1
---
> DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE=
957c957
< DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXPORTPARALLELISMDEGREE=1
---
> DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXPORTPARALLELISMDEGREE=
969c969
< DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXCLUDETYPELIST=STATISTICS
---
> DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXCLUDETYPELIST=
1137c1137
< DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_NAME=MIG_SOURCE_DEST
---
> DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_NAME=
1146c1146
< DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_PATH=/mnt/nfs_share/ONPR
---
> DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_PATH=
1166c1166
< DATAPUMPSETTINGS_IMPORTDIRECTORYOBJECT_NAME=MIG_SOURCE_DEST
---
> DATAPUMPSETTINGS_IMPORTDIRECTORYOBJECT_NAME=
1175c1175
< DATAPUMPSETTINGS_IMPORTDIRECTORYOBJECT_PATH=/mnt/nfs_nfs_share/ONPR
---
> DATAPUMPSETTINGS_IMPORTDIRECTORYOBJECT_PATH=
2146c2146
< TABLESPACEDETAILS_AUTOCREATE=TRUE
---
> TABLESPACEDETAILS_AUTOCREATE=

ZDM Build Version

I’m using ZDM build 21.4.

[zdmuser@zdmhost migration]$ /u01/app/oracle/product/zdm/bin/zdmcli -build
version: 21.0.0.0.0
full version: 21.4.0.0.0
patch version: 21.4.1.0.0
label date: 221207.25
ZDM kit build date: Jul 31 2023 14:24:25 UTC
CPAT build version: 23.7.0

The migration will be done using the ZDM CLI (zdmcli), which runs migrations through jobs. We can abort, query, modify, suspend or resume a running job.

Evaluate the migration

We will first run zdmcli with the -eval option to evaluate the migration and check that everything is OK.

We need to provide some arguments:

-sourcesid
    Database name of the source database, in case the source database is a single instance deployed on a non-Grid-Infrastructure environment.

-rsp
    ZDM response file.

-sourcenode
    Source host.

-srcauth (with 3 sub-arguments: -srcarg1, -srcarg2, -srcarg3)
    Name of the source authentication plug-in. 1st argument: user (should be oracle); 2nd argument: ZDM private RSA key; 3rd argument: sudo location.

-targetnode
    Target host.

-tgtauth (with 3 sub-arguments: -tgtarg1, -tgtarg2, -tgtarg3)
    Name of the target authentication plug-in. 1st argument: user (should be opc); 2nd argument: ZDM private RSA key; 3rd argument: sudo location.

[zdmuser@zdmhost migration]$ /u01/app/oracle/product/zdm/bin/zdmcli migrate database -sourcesid ONPR -rsp /home/zdmuser/migration/zdm_ONPR_logical_offline.rsp -sourcenode vmonpr -srcauth zdmauth -srcarg1 user:oracle -srcarg2 identity_file:/home/zdmuser/.ssh/id_rsa -srcarg3 sudo_location:/usr/bin/sudo -targetnode ExaCC-cl01n1 -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/zdmuser/.ssh/id_rsa -tgtarg3 sudo_location:/usr/bin/sudo -eval
zdmhost.domain.com: Audit ID: 194
Enter source database administrative user "system" password:
Enter target database administrative user "system" password:
Operation "zdmcli migrate database" scheduled with the job ID "27".
[zdmuser@zdmhost migration]$

[zdmuser@zdmhost migration]$ /u01/app/oracle/product/zdm/bin/zdmcli query job -jobid 27
zdmhost.domain.com: Audit ID: 197
Job ID: 27
User: zdmuser
Client: zdmhost
Job Type: "EVAL"
Scheduled job command: "zdmcli migrate database -sourcesid ONPR -rsp /home/zdmuser/migration/zdm_ONPR_logical_offline.rsp -sourcenode vmonpr -srcauth zdmauth -srcarg1 user:oracle -srcarg2 identity_file:/home/zdmuser/.ssh/id_rsa -srcarg3 sudo_location:/usr/bin/sudo -targetnode ExaCC-cl01n1 -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/zdmuser/.ssh/id_rsa -tgtarg3 sudo_location:/usr/bin/sudo -eval"
Scheduled job execution start time: 2024-02-06T16:03:49+01. Equivalent local time: 2024-02-06 16:03:49
Current status: SUCCEEDED
Result file path: "/u01/app/oracle/chkbase/scheduled/job-27-2024-02-06-16:04:01.log"
Metrics file path: "/u01/app/oracle/chkbase/scheduled/job-27-2024-02-06-16:04:01.json"
Excluded objects file path: "/u01/app/oracle/chkbase/scheduled/job-27-filtered-objects-2024-02-06T16:04:13.522.json"
Job execution start time: 2024-02-06 16:04:01
Job execution end time: 2024-02-06 16:05:55
Job execution elapsed time: 1 minutes 54 seconds
ZDM_VALIDATE_TGT ...................... COMPLETED
ZDM_VALIDATE_SRC ...................... COMPLETED
ZDM_SETUP_SRC ......................... COMPLETED
ZDM_PRE_MIGRATION_ADVISOR ............. COMPLETED
ZDM_VALIDATE_DATAPUMP_SETTINGS_SRC .... COMPLETED
ZDM_VALIDATE_DATAPUMP_SETTINGS_TGT .... COMPLETED
ZDM_PREPARE_DATAPUMP_SRC .............. COMPLETED
ZDM_DATAPUMP_ESTIMATE_SRC ............. COMPLETED
ZDM_CLEANUP_SRC ....................... COMPLETED

We can see that the Job Type is EVAL, and that the Current Status is SUCCEEDED, with all precheck steps having a COMPLETED status.

We can also review the log file, which provides more information. We will see all the checks the tool performs. We can also review the output of the advisor, which is already warning us about old password versions for some users. Reviewing all the advisor outputs might help. We can also see that ZDM will ignore a few ORA errors as non-critical. This makes sense because the migration should still proceed even if, for example, a user already exists with empty objects.

[zdmuser@zdmhost ~]$ cat /u01/app/oracle/chkbase/scheduled/job-27-2024-02-06-16:04:01.log
zdmhost: 2024-02-06T15:04:01.505Z : Starting zero downtime migrate operation ...
zdmhost: 2024-02-06T15:04:01.511Z : Executing phase ZDM_VALIDATE_TGT
zdmhost: 2024-02-06T15:04:04.952Z : Fetching details of on premises Exadata Database "ONPRZ_APP_001T_PRI.domain.com"
zdmhost: 2024-02-06T15:04:04.953Z : Type of database : "Exadata at Customer"
zdmhost: 2024-02-06T15:04:05.014Z : Verifying configuration and status of target database "ONPRZ_APP_001T_PRI.domain.com"
zdmhost: 2024-02-06T15:04:09.067Z : Global database name: ONPRZ_APP_001T.DOMAIN.COM
zdmhost: 2024-02-06T15:04:09.067Z : Target PDB name : ONPRZ_APP_001T
zdmhost: 2024-02-06T15:04:09.068Z : Database major version : 19
zdmhost: 2024-02-06T15:04:09.069Z : obtaining database ONPRZ_APP_001T.DOMAIN.COM tablespace configuration details...
zdmhost: 2024-02-06T15:04:09.585Z : Execution of phase ZDM_VALIDATE_TGT completed
zdmhost: 2024-02-06T15:04:09.670Z : Executing phase ZDM_VALIDATE_SRC
zdmhost: 2024-02-06T15:04:09.736Z : Verifying configuration and status of source database "ONPR.domain.com"
zdmhost: 2024-02-06T15:04:09.737Z : source database host vmonpr service ONPR.domain.com
zdmhost: 2024-02-06T15:04:13.464Z : Global database name: ONPR.DOMAIN.COM
zdmhost: 2024-02-06T15:04:13.465Z : Database major version : 19
zdmhost: 2024-02-06T15:04:13.466Z : Validating database time zone compatibility...
zdmhost: 2024-02-06T15:04:13.521Z : Database objects which will be migrated : [USER2, USER3]
zdmhost: 2024-02-06T15:04:13.530Z : Execution of phase ZDM_VALIDATE_SRC completed
zdmhost: 2024-02-06T15:04:13.554Z : Executing phase ZDM_SETUP_SRC
zdmhost: 2024-02-06T15:05:04.925Z : Execution of phase ZDM_SETUP_SRC completed
zdmhost: 2024-02-06T15:05:04.944Z : Executing phase ZDM_PRE_MIGRATION_ADVISOR
zdmhost: 2024-02-06T15:05:05.371Z : Running CPAT (Cloud Premigration Advisor Tool) on the source node vmonpr ...
zdmhost: 2024-02-06T15:05:07.894Z : Premigration advisor output:
Cloud Premigration Advisor Tool Version 23.7.0
CPAT-4007: Warning: the build date for this version of the Cloud Premigration Advisor Tool is over 216 days.  Please run "premigration.sh --updatecheck" to see if a more recent version of this tool is available.
Please download the latest available version of the CPAT application.

Cloud Premigration Advisor Tool completed with overall result: Review Required
Cloud Premigration Advisor Tool generated report location: /u00/app/oracle/zdm/zdm_ONPR_27/out/premigration_advisor_report.json
Cloud Premigration Advisor Tool generated report location: /u00/app/oracle/zdm/zdm_ONPR_27/out/premigration_advisor_report.txt

 CPAT exit code: 2
 RESULT: Review Required

Schemas Analyzed (2): USER3,USER2
A total of 17 checks were performed
There were 0 checks with Failed results
There were 0 checks with Action Required results
There were 2 checks with Review Required results: has_noexport_object_grants (8 relevant objects), has_users_with_10g_password_version (1 relevant objects)
There were 0 checks with Review Suggested results

has_noexport_object_grants
         RESULT: Review Required
         DESCRIPTION: Not all object grants are exported by Data Pump.
         ACTION: Recreate any required grants on the target instance.  See Oracle Support Document ID 1911151.1 for more information. Note that any SELECT grants on system objects will need to be replaced with READ grants; SELECT is no longer allowed on system objects.
has_users_with_10g_password_version
         RESULT: Review Required
         DESCRIPTION: Case-sensitive passwords are required on ADB.
         ACTION: To avoid Data Pump migration warnings change the passwords for the listed users before migration. Alternatively, modify these users passwords after migration to avoid login failures. See Oracle Support Document ID 2289453.1 for more information.

zdmhost: 2024-02-06T15:05:07.894Z : Execution of phase ZDM_PRE_MIGRATION_ADVISOR completed
zdmhost: 2024-02-06T15:05:07.948Z : Executing phase ZDM_VALIDATE_DATAPUMP_SETTINGS_SRC
zdmhost: 2024-02-06T15:05:08.545Z : validating Oracle Data Pump dump directory /mnt/nfs_share/ONPR/ ...
zdmhost: 2024-02-06T15:05:08.545Z : validating Data Pump dump directory path /mnt/nfs_share/ONPR/ on node vmonpr.domain.com ...
zdmhost: 2024-02-06T15:05:08.975Z : validating if target database user can read files shared on medium NFS
zdmhost: 2024-02-06T15:05:08.976Z : setting Data Pump dump file permission at source node...
zdmhost: 2024-02-06T15:05:08.977Z : changing group of Data Pump dump files in directory path /mnt/nfs_share/ONPR/ on node vmonpr.domain.com ...
zdmhost: 2024-02-06T15:05:09.958Z : Execution of phase ZDM_VALIDATE_DATAPUMP_SETTINGS_SRC completed
zdmhost: 2024-02-06T15:05:10.005Z : Executing phase ZDM_VALIDATE_DATAPUMP_SETTINGS_TGT
zdmhost: 2024-02-06T15:05:13.307Z : validating Oracle Data Pump dump directory /mnt/nfs_nfs_share/ONPR ...
zdmhost: 2024-02-06T15:05:13.308Z : listing directory path /mnt/nfs_nfs_share/ONPR on node ExaCC-cl01n1.domain.com ...
zdmhost: 2024-02-06T15:05:14.008Z : Execution of phase ZDM_VALIDATE_DATAPUMP_SETTINGS_TGT completed
zdmhost: 2024-02-06T15:05:14.029Z : Executing phase ZDM_PREPARE_DATAPUMP_SRC
zdmhost: 2024-02-06T15:05:14.033Z : Execution of phase ZDM_PREPARE_DATAPUMP_SRC completed
zdmhost: 2024-02-06T15:05:14.058Z : Executing phase ZDM_DATAPUMP_ESTIMATE_SRC
zdmhost: 2024-02-06T15:05:14.059Z : starting Data Pump Dump estimate for database "ONPR.DOMAIN.COM"
zdmhost: 2024-02-06T15:05:14.060Z : running Oracle Data Pump job "ZDM_27_DP_ESTIMATE_6279" for database "ONPR.DOMAIN.COM"
zdmhost: 2024-02-06T15:05:14.071Z : applying Data Pump dump compression ALL algorithm MEDIUM
zdmhost: 2024-02-06T15:05:14.135Z : applying Data Pump dump encryption ALL algorithm AES128
zdmhost: 2024-02-06T15:05:14.135Z : Oracle Data Pump Export parallelism set to 1 ...
zdmhost: 2024-02-06T15:05:14.286Z : Oracle Data Pump errors to be ignored are ORA-31684,ORA-39111,ORA-39082...
zdmhost: 2024-02-06T15:05:23.515Z : Oracle Data Pump log located at /mnt/nfs_share/ONPR//ZDM_27_DP_ESTIMATE_6279.log in the Database Server node
zdmhost: 2024-02-06T15:05:53.643Z : Total estimation using BLOCKS method: 3.112 GB
zdmhost: 2024-02-06T15:05:53.644Z : Execution of phase ZDM_DATAPUMP_ESTIMATE_SRC completed
zdmhost: 2024-02-06T15:05:53.721Z : Executing phase ZDM_CLEANUP_SRC
zdmhost: 2024-02-06T15:05:54.261Z : Cleaning up ZDM on the source node vmonpr ...
zdmhost: 2024-02-06T15:05:55.506Z : Execution of phase ZDM_CLEANUP_SRC completed

Migrate Source database to ExaCC

Once the evaluation is all good, we can move forward with running the migration. It is exactly the same zdmcli command, just without the -eval option.

Let's run it. We will have to provide both the source and target SYSTEM passwords:

[zdmuser@zdmhost migration]$ /u01/app/oracle/product/zdm/bin/zdmcli migrate database -sourcesid ONPR -rsp /home/zdmuser/migration/zdm_ONPR_logical_offline.rsp -sourcenode vmonpr -srcauth zdmauth -srcarg1 user:oracle -srcarg2 identity_file:/home/zdmuser/.ssh/id_rsa -srcarg3 sudo_location:/usr/bin/sudo -targetnode ExaCC-cl01n1 -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/zdmuser/.ssh/id_rsa -tgtarg3 sudo_location:/usr/bin/sudo
zdmhost.domain.com: Audit ID: 205
Enter source database administrative user "system" password:
Enter target database administrative user "system" password:
Operation "zdmcli migrate database" scheduled with the job ID "29".

We will query the job:

[zdmuser@zdmhost migration]$ /u01/app/oracle/product/zdm/bin/zdmcli query job -jobid 29
zdmhost.domain.com: Audit ID: 211
Job ID: 29
User: zdmuser
Client: zdmhost
Job Type: "MIGRATE"
Scheduled job command: "zdmcli migrate database -sourcesid ONPR -rsp /home/zdmuser/migration/zdm_ONPR_logical_offline.rsp -sourcenode vmonpr -srcauth zdmauth -srcarg1 user:oracle -srcarg2 identity_file:/home/zdmuser/.ssh/id_rsa -srcarg3 sudo_location:/usr/bin/sudo -targetnode ExaCC-cl01n1 -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/zdmuser/.ssh/id_rsa -tgtarg3 sudo_location:/usr/bin/sudo"
Scheduled job execution start time: 2024-02-07T08:21:38+01. Equivalent local time: 2024-02-07 08:21:38
Current status: FAILED
Result file path: "/u01/app/oracle/chkbase/scheduled/job-29-2024-02-07-08:22:03.log"
Metrics file path: "/u01/app/oracle/chkbase/scheduled/job-29-2024-02-07-08:22:03.json"
Excluded objects file path: "/u01/app/oracle/chkbase/scheduled/job-29-filtered-objects-2024-02-07T08:22:16.074.json"
Job execution start time: 2024-02-07 08:22:03
Job execution end time: 2024-02-07 08:30:29
Job execution elapsed time: 8 minutes 25 seconds
ZDM_VALIDATE_TGT ...................... COMPLETED
ZDM_VALIDATE_SRC ...................... COMPLETED
ZDM_SETUP_SRC ......................... COMPLETED
ZDM_PRE_MIGRATION_ADVISOR ............. COMPLETED
ZDM_VALIDATE_DATAPUMP_SETTINGS_SRC .... COMPLETED
ZDM_VALIDATE_DATAPUMP_SETTINGS_TGT .... COMPLETED
ZDM_PREPARE_DATAPUMP_SRC .............. COMPLETED
ZDM_DATAPUMP_ESTIMATE_SRC ............. COMPLETED
ZDM_PREPARE_DATAPUMP_TGT .............. COMPLETED
ZDM_PARALLEL_EXPORT_IMPORT ............ FAILED
ZDM_POST_DATAPUMP_SRC ................. PENDING
ZDM_POST_DATAPUMP_TGT ................. PENDING
ZDM_POST_ACTIONS ...................... PENDING
ZDM_CLEANUP_SRC ....................... PENDING

As we can see, the job failed during the import of the data.

Checking the ZDM log file, I could see the following errors:

ORA-39384: Warning: User USER2 has been locked and the password expired.
ORA-39384: Warning: User USER1 has been locked and the password expired.

Checking the users on the source, I could see that USER1 and USER2 only have a password in the old 10G version, which will definitely cause problems:

SQL> select username, account_status, lock_date, password_versions from dba_users where ORACLE_MAINTAINED='N';

USERNAME                       ACCOUNT_STATUS                   LOCK_DATE            PASSWORD_VERSIONS
------------------------------ -------------------------------- -------------------- -----------------
USER1                          OPEN                                                  10G
USER2                          OPEN                                                  10G
USER3                          OPEN                                                  10G 11G 12C

3 rows selected.

Checking on the target PDB on the ExaCC, I could see that, as these 2 users only had a 10G password version, ZDM locked them after importing the data:

SQL> select username, account_status, lock_date from dba_users where ORACLE_MAINTAINED='N';

USERNAME                       ACCOUNT_STATUS                   LOCK_DATE
------------------------------ -------------------------------- --------------------
USER1                          EXPIRED & LOCKED                 07-FEB-2024 08:26:10
ADMIN                          LOCKED                           06-FEB-2024 14:36:18
USER2                          EXPIRED & LOCKED                 07-FEB-2024 08:26:10
USER3                          OPEN

4 rows selected.

On the ExaCC target PDB, I unlocked the users and changed their passwords:

SQL> alter user USER2 account unlock;

User altered.

SQL> alter user user1 account unlock;

User altered.

SQL> alter user USER2 identified by ************;

User altered.

SQL> alter user user1 identified by ************;

User altered.

SQL> select username, account_status, lock_date from dba_users where ORACLE_MAINTAINED='N';

USERNAME                       ACCOUNT_STATUS                   LOCK_DATE
------------------------------ -------------------------------- --------------------
USER1                          OPEN
ADMIN                          LOCKED                           06-FEB-2024 14:36:18
USER2                          OPEN
USER3                          OPEN

4 rows selected.

I then resumed the zdmcli job so it would start again from where it had failed:

[zdmuser@zdmhost migration]$ /u01/app/oracle/product/zdm/bin/zdmcli resume job -jobid 29
zdmhost.domain.com: Audit ID: 213

The job was still failing at the same step, and in the log file I could find several errors like:

BATCH1 : Non-ignorable errors found in Oracle Data Pump job ZDM_29_DP_IMPORT_5005_BATCH1 log are
ORA-39151: Table "USER3"."OPB_MAP_OPTIONS" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
ORA-39151: Table "USER3"."OPB_USER_GROUPS" exists. All dependent metadata and data will be skipped due to table_exists_action of skip

In fact, as ZDM had previously failed on the import step, it tried to import the data again, but the tables were still there.

So I had to clean up the target PDB on the ExaCC for USER3 and USER2. USER1 had no objects.

As I did not want to change the user passwords on the on-premises source database, I checked how the users had been created on the ExaCC, then dropped and recreated them before resuming the job:

SQL> set long 99999999
SQL> select dbms_metadata.get_ddl('USER','USER2') from dual;

DBMS_METADATA.GET_DDL('USER','USER2')
--------------------------------------------------------------------------------

   CREATE USER "USER2" IDENTIFIED BY VALUES 'S:C5EF**********3F79'
      DEFAULT TABLESPACE "TSP******"
      TEMPORARY TABLESPACE "TEMP"

SQL> select dbms_metadata.get_ddl('USER','USER3') from dual;

DBMS_METADATA.GET_DDL('USER','USER3')
--------------------------------------------------------------------------------

   CREATE USER "USER3" IDENTIFIED BY VALUES 'S:EDD8**********FD44'
      DEFAULT TABLESPACE "TSP******"
      TEMPORARY TABLESPACE "TEMP"

SQL> drop user USER2 cascade;

User dropped.

SQL> drop user USER3 cascade;

User dropped.

SQL> CREATE USER "USER3" IDENTIFIED BY VALUES 'S:EDD86**********8FD44'
  2  DEFAULT TABLESPACE "TSP******"
  3  TEMPORARY TABLESPACE "TEMP";

User created.

SQL> CREATE USER "USER2" IDENTIFIED BY VALUES 'S:C5EF**********3F79'
  2  DEFAULT TABLESPACE "TSP******"
  3  TEMPORARY TABLESPACE "TEMP";

User created.

And I resumed the job once again:

[zdmuser@zdmhost migration]$ /u01/app/oracle/product/zdm/bin/zdmcli resume job -jobid 29
zdmhost.domain.com: Audit ID: 219

And now the migration has completed successfully. The Job Type is MIGRATE and the Current Status is SUCCEEDED:

[zdmuser@zdmhost migration]$ /u01/app/oracle/product/zdm/bin/zdmcli query job -jobid 29
zdmhost.domain.com: Audit ID: 223
Job ID: 29
User: zdmuser
Client: zdmhost
Job Type: "MIGRATE"
Scheduled job command: "zdmcli migrate database -sourcesid ONPR -rsp /home/zdmuser/migration/zdm_ONPR_logical_offline.rsp -sourcenode vmonpr -srcauth zdmauth -srcarg1 user:oracle -srcarg2 identity_file:/home/zdmuser/.ssh/id_rsa -srcarg3 sudo_location:/usr/bin/sudo -targetnode ExaCC-cl01n1 -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/zdmuser/.ssh/id_rsa -tgtarg3 sudo_location:/usr/bin/sudo"
Scheduled job execution start time: 2024-02-07T08:21:38+01. Equivalent local time: 2024-02-07 08:21:38
Current status: SUCCEEDED
Result file path: "/u01/app/oracle/chkbase/scheduled/job-29-2024-02-07-08:22:03.log"
Metrics file path: "/u01/app/oracle/chkbase/scheduled/job-29-2024-02-07-08:22:03.json"
Excluded objects file path: "/u01/app/oracle/chkbase/scheduled/job-29-filtered-objects-2024-02-07T08:22:16.074.json"
Job execution start time: 2024-02-07 08:22:03
Job execution end time: 2024-02-07 09:01:21
Job execution elapsed time: 14 minutes 43 seconds
ZDM_VALIDATE_TGT ...................... COMPLETED
ZDM_VALIDATE_SRC ...................... COMPLETED
ZDM_SETUP_SRC ......................... COMPLETED
ZDM_PRE_MIGRATION_ADVISOR ............. COMPLETED
ZDM_VALIDATE_DATAPUMP_SETTINGS_SRC .... COMPLETED
ZDM_VALIDATE_DATAPUMP_SETTINGS_TGT .... COMPLETED
ZDM_PREPARE_DATAPUMP_SRC .............. COMPLETED
ZDM_DATAPUMP_ESTIMATE_SRC ............. COMPLETED
ZDM_PREPARE_DATAPUMP_TGT .............. COMPLETED
ZDM_PARALLEL_EXPORT_IMPORT ............ COMPLETED
ZDM_POST_DATAPUMP_SRC ................. COMPLETED
ZDM_POST_DATAPUMP_TGT ................. COMPLETED
ZDM_POST_ACTIONS ...................... COMPLETED
ZDM_CLEANUP_SRC ....................... COMPLETED

ZDM log file:

[zdmuser@zdmhost ~]$ tail -37 /u01/app/oracle/chkbase/scheduled/job-29-2024-02-07-08:22:03.log
####################################################################
zdmhost: 2024-02-07T07:56:33.580Z : Resuming zero downtime migrate operation ...
zdmhost: 2024-02-07T07:56:33.587Z : Starting zero downtime migrate operation ...
zdmhost: 2024-02-07T07:56:37.205Z : Fetching details of on premises Exadata Database "ONPRZ_APP_001T_PRI.domain.com"
zdmhost: 2024-02-07T07:56:37.205Z : Type of database : "Exadata at Customer"
zdmhost: 2024-02-07T07:56:37.283Z : Skipping phase ZDM_VALIDATE_SRC on resume
zdmhost: 2024-02-07T07:56:37.365Z : Skipping phase ZDM_SETUP_SRC on resume
zdmhost: 2024-02-07T07:56:37.377Z : Skipping phase ZDM_PRE_MIGRATION_ADVISOR on resume
zdmhost: 2024-02-07T07:56:37.391Z : Skipping phase ZDM_VALIDATE_DATAPUMP_SETTINGS_SRC on resume
zdmhost: 2024-02-07T07:56:37.406Z : Skipping phase ZDM_VALIDATE_DATAPUMP_SETTINGS_TGT on resume
zdmhost: 2024-02-07T07:56:37.422Z : Skipping phase ZDM_PREPARE_DATAPUMP_SRC on resume
zdmhost: 2024-02-07T07:56:37.437Z : Skipping phase ZDM_DATAPUMP_ESTIMATE_SRC on resume
zdmhost: 2024-02-07T07:56:37.455Z : Skipping phase ZDM_PREPARE_DATAPUMP_TGT on resume
zdmhost: 2024-02-07T07:56:37.471Z : Executing phase ZDM_PARALLEL_EXPORT_IMPORT
zdmhost: 2024-02-07T07:56:37.482Z : Skipping phase ZDM_DATAPUMP_EXPORT_SRC_BATCH1 on resume
zdmhost: 2024-02-07T07:56:37.485Z : Skipping phase ZDM_TRANSFER_DUMPS_SRC_BATCH1 on resume
zdmhost: 2024-02-07T07:56:37.487Z : Executing phase ZDM_DATAPUMP_IMPORT_TGT_BATCH1
zdmhost: 2024-02-07T07:56:38.368Z : listing directory path /mnt/nfs_nfs_share/ONPR on node ExaCC-cl01n1.domain.com ...
zdmhost: 2024-02-07T07:56:39.474Z : Oracle Data Pump Import parallelism set to 1 ...
zdmhost: 2024-02-07T07:56:39.481Z : Oracle Data Pump errors to be ignored are ORA-31684,ORA-39111,ORA-39082...
zdmhost: 2024-02-07T07:56:39.481Z : starting Data Pump Import for database "ONPRZ_APP_001T.DOMAIN.COM"
zdmhost: 2024-02-07T07:56:39.482Z : running Oracle Data Pump job "ZDM_29_DP_IMPORT_5005_BATCH1" for database "ONPRZ_APP_001T.DOMAIN.COM"
zdmhost: 2024-02-07T08:00:46.569Z : Oracle Data Pump job "ZDM_29_DP_IMPORT_5005_BATCH1" for database "ONPRZ_APP_001T.DOMAIN.COM" completed.
zdmhost: 2024-02-07T08:00:46.569Z : Oracle Data Pump log located at /mnt/nfs_nfs_share/ONPR/ZDM_29_DP_IMPORT_5005_BATCH1.log in the Database Server node
zdmhost: 2024-02-07T08:01:17.239Z : Execution of phase ZDM_DATAPUMP_IMPORT_TGT_BATCH1 completed
zdmhost: 2024-02-07T08:01:17.248Z : Execution of phase ZDM_PARALLEL_EXPORT_IMPORT completed
zdmhost: 2024-02-07T08:01:17.268Z : Executing phase ZDM_POST_DATAPUMP_SRC
zdmhost: 2024-02-07T08:01:17.272Z : listing directory path /mnt/nfs_share/ONPR/ on node vmonpr.domain.com ...
zdmhost: 2024-02-07T08:01:17.811Z : deleting Data Pump dump in directory path /mnt/nfs_share/ONPR/ on node vmonpr.domain.com ...
zdmhost: 2024-02-07T08:01:19.052Z : Execution of phase ZDM_POST_DATAPUMP_SRC completed
zdmhost: 2024-02-07T08:01:19.070Z : Executing phase ZDM_POST_DATAPUMP_TGT
zdmhost: 2024-02-07T08:01:19.665Z : Execution of phase ZDM_POST_DATAPUMP_TGT completed
zdmhost: 2024-02-07T08:01:19.689Z : Executing phase ZDM_POST_ACTIONS
zdmhost: 2024-02-07T08:01:19.693Z : Execution of phase ZDM_POST_ACTIONS completed
zdmhost: 2024-02-07T08:01:19.716Z : Executing phase ZDM_CLEANUP_SRC
zdmhost: 2024-02-07T08:01:20.213Z : Cleaning up ZDM on the source node vmonpr ...
zdmhost: 2024-02-07T08:01:21.458Z : Execution of phase ZDM_CLEANUP_SRC completed
[zdmuser@zdmhost ~]$

If we check the ZDM import log saved on the NFS share, here named ZDM_32_DP_IMPORT_1847_BATCH1.log, we can see that the import completed successfully with 3 errors. The 3 errors are displayed in the same log file:

09-FEB-24 10:00:22.534: W-1 Processing object type SCHEMA_EXPORT/USER
09-FEB-24 10:00:22.943: ORA-31684: Object type USER:"USER1" already exists
09-FEB-24 10:00:22.943: ORA-31684: Object type USER:"USER2" already exists
09-FEB-24 10:00:22.943: ORA-31684: Object type USER:"USER3" already exists

These errors appear because we had created the users on the ExaCC target DB before resuming the zdmcli job, thus before the import was performed again. Fortunately, these errors are part of the list that ZDM ignores (ORA-31684, ORA-39111, ORA-39082), which makes sense.
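
As a side note, such a log can also be checked programmatically. Here is a minimal sketch (a hypothetical helper, not part of ZDM) that scans a Data Pump import log for ORA- errors and separates the ones ZDM ignores from the rest:

```python
import re

# Errors ZDM is configured to ignore, as listed in the job log above
IGNORABLE = {"ORA-31684", "ORA-39111", "ORA-39082"}

def classify_ora_errors(log_text):
    """Return (ignored, fatal) lists of ORA- error codes found in a Data Pump log."""
    codes = re.findall(r"ORA-\d{5}", log_text)
    ignored = [c for c in codes if c in IGNORABLE]
    fatal = [c for c in codes if c not in IGNORABLE]
    return ignored, fatal

# Sample lines taken from the import log shown above
sample = """\
09-FEB-24 10:00:22.943: ORA-31684: Object type USER:"USER1" already exists
09-FEB-24 10:00:22.943: ORA-31684: Object type USER:"USER2" already exists
09-FEB-24 10:00:22.943: ORA-31684: Object type USER:"USER3" already exists
"""

ignored, fatal = classify_ora_errors(sample)
print(len(ignored), len(fatal))  # prints: 3 0
```

All 3 errors fall into the ignorable list, which matches the SUCCEEDED job status reported by zdmcli.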

Checks

We can then of course run a few checks, such as comparing the number of objects for the migrated users on the source and the target, checking PDB violations, checking invalid objects, ensuring that the tablespaces are encrypted on the ExaCC target DB, and so on.

To compare the number of objects:

SQL> select owner, count(*) from dba_objects where owner in ('USER1','USER2','USER3') group by owner order by 1;

OWNER             COUNT(*)
--------------- ----------
USER3                758
USER2                760
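
The comparison itself is simple once the per-owner counts have been collected from both sides. Here is a small sketch (the counts below are illustrative only, not the real customer figures) that flags any owner whose object count differs between source and target:

```python
# Per-owner object counts, as returned by the query above on each database
# (illustrative numbers only)
source_counts = {"USER1": 812, "USER2": 760, "USER3": 758}
target_counts = {"USER1": 812, "USER2": 760, "USER3": 755}

def diff_counts(source, target):
    """Return owners whose object counts differ between source and target."""
    mismatches = {}
    for owner in sorted(set(source) | set(target)):
        s, t = source.get(owner, 0), target.get(owner, 0)
        if s != t:
            mismatches[owner] = (s, t)
    return mismatches

print(diff_counts(source_counts, target_counts))  # prints: {'USER3': (758, 755)}
```

An empty result means both sides match; any entry points to an owner worth investigating (for example, objects that failed to import).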

To check that tablespaces are encrypted:

SQL> select a.con_id, a.tablespace_name, nvl(b.encryptionalg,'NOT ENCRYPTED')
     from cdb_tablespaces a,
          (select x.con_id, y.encryptionalg, x.name
           from v$tablespace x, v$encrypted_tablespaces y
           where x.ts#=y.ts# and x.con_id=y.con_id) b
     where a.con_id=b.con_id(+) and a.tablespace_name=b.name(+)
     order by 1,2;

To check PDB violations:

SQL> select status, message from pdb_plug_in_violations;

To check invalid objects:

SQL> select count(*) from dba_objects where status='INVALID';

And we could, of course, if needed, relocate the PDB to another ExaCC CDB.

Conclusion

That’s it. We could easily migrate a single-tenant on-premises database to an ExaCC PDB using ZDM Logical Offline Migration. The tool has real advantages: we do not need to deal with any Oracle commands ourselves, such as running Data Pump manually.

In the next blog I will show you how we migrated an on-premises database to ExaCC on our customer's system using ZDM Physical Online Migration.