I recently blogged about ZDM Physical Online Migration, explaining this migration method and the preparation of a customer's on-premises to ExaCC migration. See my blog: https://www.dbi-services.com/blog/physical-online-migration-to-exacc-with-oracle-zero-downtime-migration-zdm/. I have since had the opportunity to migrate a customer database and would like to share this success story in this post.
Clarifications
First of all, I encourage you to read the previously mentioned blog to understand the requirements, the preparation steps, the migration steps and the parameters used during a ZDM Physical Online Migration. This blog is not intended to repeat the information already presented there, but to show in detail how the migration was performed.
Compared to the migration preparation detailed in the previous blog, there are 3 parameters that we will ultimately not activate:
- SKIP_FALLBACK: Even with this migration method, there is no real fallback possible; if one is needed, it has to be put in place by the user. By fallback, I mean giving the customer the possibility, after a few days of running the new environment in production, to switch back to the previous on-premises environment, for example in case of a performance problem. With the ZDM migration this is in any event not possible in our case, because we are converting the non-CDB on-premises database to a PDB on the new ExaCC, and building a Data Guard configuration between a non-CDB and a CDB is not possible. Instead, we did some dry runs, giving the customer the possibility to test the new ExaCC environment before the migration and make sure everything was fine, performance-wise and otherwise. Our procedure still includes a fallback, but one that only allows reactivating the on-premises database if the application tests are not successful after the migration and before the new PDB on the ExaCC goes into production.
- TGT_RETAIN_DB_UNIQUE_NAME: This is a legacy parameter that was mandatory on the first generation of the ExaCC, where the platform set db_unique_name to the same value as the db name. This parameter is no longer needed and is deprecated.
- TGT_SKIP_DATAPATCH: We faced several bugs with the source database running 19.10 or 19.16. We therefore decided to patch the on-premises source database to the same version as the one running on the ExaCC, that is to say 19.22. In this case the datapatch step is not needed, and we set this parameter to true in order to skip it.
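Since the datapatch step will be skipped, it is worth checking beforehand that the source database really is at the expected patch level. A quick sketch of such a check:

```sql
-- Check the SQL patches registered in the source database;
-- the latest RU entry should show 19.22 with STATUS = SUCCESS.
SELECT patch_id, action, status, description
FROM   dba_registry_sqlpatch
ORDER  BY action_time;
```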
Finally, I would like to recall that I have anonymised all outputs to remove customer infrastructure names. Let's keep the same convention as before:
ExaCC Cluster 01 node 01 : ExaCC-cl01n1
ExaCC Cluster 01 node 02 : ExaCC-cl01n2
On premises Source Host : vmonpr
Target db_unique_name on the ExaCC : ONPR_RZ2
Database Name to migrate : ONPR
ZDM Host : zdmhost
ZDM user : zdmuser
Domain : domain.com
ExaCC PDB to migrate to : ONPRZ_APP_001T
The on-premises database is a single-tenant (non-CDB) database running version 19.22.
The target database is an Oracle RAC database running on the ExaCC with Oracle version 19.22.
The Oracle Net port used on the on-premises side is 13000 and the Oracle Net port used on the ExaCC is 1521.
We will use ZDM to migrate the on-premises single-tenant database to a PDB within a CDB. ZDM will be in charge of migrating the database to the ExaCC using Data Guard, converting the non-CDB database to a PDB within the target CDB, and upgrading the time zone file. The standby database will be created through a direct connection, without any backup.
We will therefore migrate the on-premises single-tenant database, named ONPR, to a PDB named ONPRZ_APP_001T on the ExaCC.
It is also mandatory to take a valid backup of the source database before starting the migration in the maintenance window, in case any issue requires a fallback. In any case, a valid backup of the production database is always mandatory before any migration, patch or upgrade.
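Such a backup could, for example, be taken and validated with RMAN. This is a minimal sketch; the real backup strategy, channels and destination of course follow the customer's standards:

```sql
RMAN> -- Full backup of the database together with the archived logs
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
RMAN> -- Read the backup pieces back to confirm they are restorable
RMAN> RESTORE DATABASE VALIDATE;
```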
ZDM patch version
We were initially using ZDM patch version 21.4.1.0.0:
[zdmuser@zdmhost migration]$ /u01/app/oracle/product/zdm/bin/zdmcli -build
version: 21.0.0.0.0
full version: 21.4.0.0.0
patch version: 21.4.1.0.0
label date: 221207.25
ZDM kit build date: Jul 31 2023 14:24:25 UTC
CPAT build version: 23.7.0
But we faced a major bug where ZDM was incorrectly exporting the database master key into the new ExaCC wallet. The migration was failing with the following error once ZDM tried to open the newly created PDB:
ALTER PLUGGABLE DATABASE OPEN
*
ERROR at line 1:
ORA-28374: typed master key not found in wallet
Checking the master key id from v$database_key_info against the keys recorded in the keystore, I could confirm that on the source the database master key was correctly recorded in the keystore, but not on the target side. This was later confirmed to be a ZDM bug, solved in the latest patch, 21.4.5.0.0, which had just been released.
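The comparison I did can be sketched as follows. v$database_key_info shows the master key the database is using, while v$encryption_keys lists the keys recorded in the (opened) keystore; the identifier formats differ slightly, but the two can be matched:

```sql
-- Master key id currently in use by the database
SELECT mkeyid FROM v$database_key_info;

-- Keys recorded in the opened keystore
SELECT key_id, creation_time FROM v$encryption_keys;
```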
We patched the ZDM software on the ZDM host and are now running the following version:
[zdmuser@zdmhost migration]$ /u01/app/oracle/product/zdm/bin/zdmcli -build
version: 21.0.0.0.0
full version: 21.4.0.0.0
patch version: 21.4.5.0.0
label date: 221207.30
ZDM kit build date: Mar 21 2024 22:07:12 UTC
CPAT build version: 23.12.0
As we will see later in this blog, this new version brings some new bugs of its own, which we can easily work around.
Prepare ZDM response file
We configured the following parameters in the response file accordingly. For an explanation of each parameter used, see my previously mentioned article.
[zdmuser@zdmhost migration]$ vi zdm_ONPR_physical_online.rsp
[zdmuser@zdmhost migration]$ diff zdm_ONPR_physical_online.rsp /u01/app/oracle/product/zdm/rhp/zdm/template/zdm_template.rsp
24c24
< TGT_DB_UNIQUE_NAME=ONPR_RZ2
---
> TGT_DB_UNIQUE_NAME=
32c32
< MIGRATION_METHOD=ONLINE_PHYSICAL
---
> MIGRATION_METHOD=
63c63
< DATA_TRANSFER_MEDIUM=DIRECT
---
> DATA_TRANSFER_MEDIUM=
75c75
< PLATFORM_TYPE=EXACC
---
> PLATFORM_TYPE=
119c119
< SRC_DB_LISTENER_PORT=13000
---
> SRC_DB_LISTENER_PORT=
230c230
< NONCDBTOPDB_CONVERSION=TRUE
---
> NONCDBTOPDB_CONVERSION=FALSE
252c252
< SKIP_FALLBACK=TRUE
---
> SKIP_FALLBACK=
268c268
< TGT_RETAIN_DB_UNIQUE_NAME=FALSE
---
> TGT_RETAIN_DB_UNIQUE_NAME=
279c279
< TGT_SKIP_DATAPATCH=TRUE
---
> TGT_SKIP_DATAPATCH=FALSE
312c312
< SHUTDOWN_SRC=FALSE
---
> SHUTDOWN_SRC=
333c333
< SRC_RMAN_CHANNELS=2
---
> SRC_RMAN_CHANNELS=
340c340
< TGT_RMAN_CHANNELS=2
---
> TGT_RMAN_CHANNELS=
526c526
< ZDM_USE_DG_BROKER=TRUE
---
> ZDM_USE_DG_BROKER=
574c574
< ZDM_NONCDBTOPDB_PDB_NAME=ONPRZ_APP_001T
---
> ZDM_NONCDBTOPDB_PDB_NAME=
595c595
< ZDM_TGT_UPGRADE_TIMEZONE=TRUE
---
> ZDM_TGT_UPGRADE_TIMEZONE=FALSE
Passwordless Login
See my previous blogs for more information on this subject. It is mandatory for ZDM to have passwordless login configured between the ZDM host and the source and target nodes.
For both ExaCC nodes, it was already done:
[zdmuser@zdmhost migration]$ ssh opc@ExaCC-cl01n1
Last login: Wed Mar 20 10:44:09 2024 from 10.160.52.122
[opc@ExaCC-cl01n1 ~]$

[zdmuser@zdmhost migration]$ ssh opc@ExaCC-cl01n2
Last login: Wed Mar 20 10:32:33 2024 from 10.160.52.122
[opc@ExaCC-cl01n2 ~]$
For the on-premises database, we configured it for the source node by adding the id_rsa.pub content from the ZDM host to the authorized_keys file on the source:
[zdmuser@zdmhost .ssh]$ cat id_rsa.pub
ssh-rsa AAAAB3**************************3CW20= zdmuser@zdmhost

oracle@vmonpr:/home/oracle/.ssh/ [ONPR] echo "ssh-rsa AAAAB3*******************3CW20= zdmuser@zdmhost" >> authorized_keys
Checking that it works as expected:
[zdmuser@zdmhost .ssh]$ ssh oracle@vmonpr
Last login: Wed Mar 20 11:33:46 2024
oracle@vmonpr:/home/oracle/ [RDBMS12201_EE_190716]
Create target database on the ExaCC
As explained in the requirements and in my other blog articles, we must create a target CDB on the ExaCC with the same DB_NAME as the source database to be migrated, but a different DB_UNIQUE_NAME. In our case this will be ONPR for the DB_NAME and ONPR_RZ2 for the DB_UNIQUE_NAME. This database must exist before the migration is started with ZDM. ZDM will create another, temporary database named after the final PDB, using this target CDB as a template.
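A quick sanity check that the precreated target CDB carries the expected names can be sketched as follows, run on the ExaCC:

```sql
-- Run on the precreated target CDB on the ExaCC;
-- we expect NAME = ONPR and DB_UNIQUE_NAME = ONPR_RZ2.
SELECT name, db_unique_name FROM v$database;
```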
Convert ExaCC target database to single instance
The target database on the ExaCC is a RAC database spread over 2 hosts, so we have 2 instances for the same database and therefore 2 UNDO tablespaces. The on-premises database is a single-instance database and currently has only 1 UNDO tablespace. To prevent ZDM from creating an additional UNDO tablespace on the on-premises single-instance database, we will convert the target database from RAC to single instance.
We first set the cluster_database instance parameter to FALSE:
SQL> select inst_id, name, value from gv$parameter where lower(name)='cluster_database';

   INST_ID NAME                 VALUE
---------- -------------------- ---------------
         2 cluster_database     TRUE
         1 cluster_database     TRUE

SQL> alter system set cluster_database=FALSE scope=spfile sid='*';

System altered.
We will stop the target database:
oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)] srvctl status database -d ONPR_RZ2
Instance ONPR1 is running on node ExaCC-cl01n1
Instance ONPR2 is running on node ExaCC-cl01n2

oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)] srvctl stop database -d ONPR_RZ2

oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)] srvctl status database -d ONPR_RZ2
Instance ONPR1 is not running on node ExaCC-cl01n1
Instance ONPR2 is not running on node ExaCC-cl01n2
And remove the second instance from the grid infrastructure configuration:
oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)] srvctl remove instance -d ONPR_RZ2 -i ONPR2
Remove instance from the database ONPR_RZ2? (y/[n]) y
And restart the database:
oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)] srvctl start database -d ONPR_RZ2
oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)] srvctl status database -d ONPR_RZ2
Instance ONPR1 is running on node ExaCC-cl01n1
We can also check with gv$parameter that only one instance is running:
oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Mar 20 10:51:33 2024
Version 19.22.0.0.0

Copyright (c) 1982, 2023, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.22.0.0.0

SQL> select inst_id, name, value from gv$parameter where lower(name)='cluster_database';

   INST_ID NAME                 VALUE
---------- -------------------- ---------------
         1 cluster_database     FALSE
Configure Transparent Data Encryption (TDE) on the source database
See my previous blog for detailed explanation.
In our case, since we do not have any Oracle Advanced Security license for the on-premises database, we make sure to set the TABLESPACE_ENCRYPTION instance parameter to DECRYPT_ONLY and the ENCRYPT_NEW_TABLESPACES parameter to either CLOUD_ONLY or DDL.
SQL> !ls /u00/app/oracle/admin/ONPR/wallet
ls: cannot access /u00/app/oracle/admin/ONPR/wallet: No such file or directory

SQL> !mkdir /u00/app/oracle/admin/ONPR/wallet

SQL> !ls /u00/app/oracle/admin/ONPR/wallet

SQL> alter system set WALLET_ROOT='/u00/app/oracle/admin/ONPR/wallet' scope=spfile;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 2147479664 bytes
Fixed Size                  8941680 bytes
Variable Size            1224736768 bytes
Database Buffers          905969664 bytes
Redo Buffers                7831552 bytes
Database mounted.
Database opened.

SQL> select WRL_PARAMETER, WRL_TYPE, WALLET_TYPE, status from V$ENCRYPTION_WALLET;

WRL_PARAMETER                  WRL_TYPE             WALLET_TYPE          STATUS
------------------------------ -------------------- -------------------- ------------------------------
                               FILE                 UNKNOWN              NOT_AVAILABLE

SQL> alter system set tde_configuration='keystore_configuration=FILE' scope=both;

System altered.

SQL> select WRL_PARAMETER, WRL_TYPE, WALLET_TYPE, status from V$ENCRYPTION_WALLET;

WRL_PARAMETER                                      WRL_TYPE             WALLET_TYPE          STATUS
-------------------------------------------------- -------------------- -------------------- ------------------------------
/u00/app/oracle/admin/ONPR/wallet/tde/             FILE                 UNKNOWN              NOT_AVAILABLE

SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE IDENTIFIED BY "*****************";

keystore altered.

SQL> !ls -ltrh /u00/app/oracle/admin/ONPR/wallet/tde
total 4.0K
-rw-------. 1 oracle dba 2.5K Mar 20 11:11 ewallet.p12

SQL> select WRL_PARAMETER, WRL_TYPE, WALLET_TYPE, status from V$ENCRYPTION_WALLET;

WRL_PARAMETER                                      WRL_TYPE             WALLET_TYPE          STATUS
-------------------------------------------------- -------------------- -------------------- ------------------------------
/u00/app/oracle/admin/ONPR/wallet/tde/             FILE                 UNKNOWN              CLOSED

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "*****************";

keystore altered.
SQL> select WRL_PARAMETER, WRL_TYPE, WALLET_TYPE, status from V$ENCRYPTION_WALLET;

WRL_PARAMETER                                      WRL_TYPE             WALLET_TYPE          STATUS
-------------------------------------------------- -------------------- -------------------- ------------------------------
/u00/app/oracle/admin/ONPR/wallet/tde/             FILE                 PASSWORD             OPEN_NO_MASTER_KEY

SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "***************" with backup;

keystore altered.

SQL> !ls -ltrh /u00/app/oracle/admin/ONPR/wallet/tde
total 8.0K
-rw-------. 1 oracle dba 2.5K Mar 20 11:14 ewallet_2024032010143867.p12
-rw-------. 1 oracle dba 3.9K Mar 20 11:14 ewallet.p12

SQL> ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE '/u00/app/oracle/admin/ONPR/wallet/tde/' IDENTIFIED BY "*****************";

keystore altered.

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE CLOSE IDENTIFIED BY "****************";

keystore altered.

SQL> select WRL_PARAMETER, WRL_TYPE, WALLET_TYPE, status from V$ENCRYPTION_WALLET;

WRL_PARAMETER                                      WRL_TYPE             WALLET_TYPE          STATUS
-------------------------------------------------- -------------------- -------------------- ------------------------------
/u00/app/oracle/admin/ONPR/wallet/tde/             FILE                 AUTOLOGIN            OPEN

SQL> !ls -ltrh /u00/app/oracle/admin/ONPR/wallet/tde
total 12K
-rw-------. 1 oracle dba 2.5K Mar 20 11:14 ewallet_2024032010143867.p12
-rw-------. 1 oracle dba 3.9K Mar 20 11:14 ewallet.p12
-rw-------. 1 oracle dba 4.0K Mar 20 11:15 cwallet.sso

SQL> show parameter tablespace_encryption

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
tablespace_encryption                string      MANUAL_ENABLE

SQL> alter system set tablespace_encryption='decrypt_only' scope=spfile;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 2147479664 bytes
Fixed Size                  8941680 bytes
Variable Size            1224736768 bytes
Database Buffers          905969664 bytes
Redo Buffers                7831552 bytes
Database mounted.
Database opened.

SQL> show parameter encrypt

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
encrypt_new_tablespaces              string      DDL
tablespace_encryption                string      DECRYPT_ONLY
Update source database SYS password
The source database SYS password must match the target one:
SQL> alter user sys identified by "********";

User altered.
Update listener configuration on the source
The source database is not managed by grid infrastructure. A static DGMGRL entry in the listener is therefore mandatory for Data Guard to be able to restart the old primary during a switchover operation. Unfortunately, ZDM does not configure it itself.
We will add the following entry to the source listener:
SID_LIST_lsr1ONPR =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = ONPR_DGMGRL.domain.com)
(ORACLE_HOME = /u00/app/oracle/product/19.22.0.0.240116.EE)
(SID_NAME = ONPR)
)
)
oracle@vmonpr:/u00/app/oracle/product/19.22.0.0.240116.EE/network/admin/ [ONPR] ls -l
total 0
lrwxrwxrwx. 1 oracle dba 38 Mar  8 09:46 ldap.ora -> /u00/app/oracle/network/admin/ldap.ora
lrwxrwxrwx. 1 oracle dba 45 Mar  8 09:46 listener.ora -> /u00/app/oracle/network/admin/listenerV12.ora
lrwxrwxrwx. 1 oracle dba 40 Mar  8 10:50 sqlnet.ora -> /u00/app/oracle/network/admin/sqlnet.ora
lrwxrwxrwx. 1 oracle dba 42 Mar  8 09:46 tnsnames.ora -> /u00/app/oracle/network/admin/tnsnames.ora

oracle@vmonpr:/u00/app/oracle/network/admin/ [ONPR] cp -p listenerV12.ora listenerV12.ora.bak.20240320
oracle@vmonpr:/u00/app/oracle/network/admin/ [ONPR] vil
oracle@vmonpr:/u00/app/oracle/network/admin/ [ONPR] diff listenerV12.ora listenerV12.ora.bak.20240320
85,94d84
< # For ZDM Migration
< SID_LIST_lsr1ONPR =
<   (SID_LIST =
<     (SID_DESC =
<       (GLOBAL_DBNAME = ONPR_DGMGRL.domain.com)
<       (ORACLE_HOME = /u00/app/oracle/product/19.22.0.0.240116.EE)
<       (SID_NAME = ONPR)
<     )
<   )
<

oracle@vmonpr:/u00/app/oracle/network/admin/ [ONPR] lsnrctl reload lsr1ONPR

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 20-MAR-2024 14:15:41

Copyright (c) 1991, 2023, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=vmonpr.domain.com)(PORT=13000)))
The command completed successfully

oracle@vmonpr:/u00/app/oracle/network/admin/ [ONPR] lsnrctl status lsr1ONPR | grep -i dgmgrl
Service "ONPR_DGMGRL.domain.com" has 1 instance(s).
Archived log
Data Guard uses the redo stream to synchronise the standby database. It is therefore mandatory to keep the archived log files that have not yet been applied on the standby database available during the whole migration process.
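One way to guarantee this on the source, sketched below, is the RMAN archived log deletion policy, so that backups do not delete archived logs the standby still needs:

```sql
RMAN> -- Archived logs only become eligible for deletion once applied
RMAN> -- on all standby databases of the configuration.
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
```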
Checking target space
We need to check that there is enough space in the DATA disk group of ASM on the ExaCC side.
[grid@ExaCC-cl01n1 ~]$ asmcmd
ASMCMD> lsdg
State    Type  Rebal  Sector  Logical_Sector  Block       AU   Total_MB    Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH  N         512             512   4096  4194304   43409664   16203032                0         5401010              0             Y  DATAC1/
MOUNTED  HIGH  N         512             512   4096  4194304   14469120   13175436                0         4391812              0             N  RECOC1/
MOUNTED  HIGH  N         512             512   4096  4194304  144691200  144688128                0        48229376              0             N  SPRC1/
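The same check can also be done with SQL against the ASM instance, for example:

```sql
-- usable_file_mb already accounts for the disk group redundancy,
-- so it is the most meaningful figure for free space.
SELECT name, state, type, total_mb, free_mb, usable_file_mb
FROM   v$asm_diskgroup;
```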
Run an evaluation of the migration
Before running the migration, it is mandatory to evaluate it. This is done using the -eval option.
[zdmuser@zdmhost migration]$ /u01/app/oracle/product/zdm/bin/zdmcli migrate database -sourcesid ONPR -rsp /home/zdmuser/migration/zdm_ONPR_physical_online.rsp -sourcenode vmonpr -srcauth zdmauth -srcarg1 user:oracle -srcarg2 identity_file:/home/zdmuser/.ssh/id_rsa -srcarg3 sudo_location:/usr/bin/sudo -targetnode ExaCC-cl01n1 -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/zdmuser/.ssh/id_rsa -tgtarg3 sudo_location:/usr/bin/sudo -tdekeystorepasswd -tgttdekeystorepasswd -eval
zdmhost.domain.com: Audit ID: 1138
Enter source database ONPR SYS password:
Enter source database ONPR TDE keystore password:
Enter target container database TDE keystore password:
zdmhost: 2024-03-22T13:04:06.449Z : Processing response file ...
Operation "zdmcli migrate database" scheduled with the job ID "73".
Checking the job, we can see that its current status is SUCCEEDED and that all precheck phases passed successfully.
[zdmuser@zdmhost migration]$ /u01/app/oracle/product/zdm/bin/zdmcli query job -jobid 73
zdmhost.domain.com: Audit ID: 1146
Job ID: 73
User: zdmuser
Client: zdmhost
Job Type: "EVAL"
Scheduled job command: "zdmcli migrate database -sourcesid ONPR -rsp /home/zdmuser/migration/zdm_ONPR_physical_online.rsp -sourcenode vmonpr -srcauth zdmauth -srcarg1 user:oracle -srcarg2 identity_file:/home/zdmuser/.ssh/id_rsa -srcarg3 sudo_location:/usr/bin/sudo -targetnode ExaCC-cl01n1 -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/zdmuser/.ssh/id_rsa -tgtarg3 sudo_location:/usr/bin/sudo -tdekeystorepasswd -tgttdekeystorepasswd -eval"
Scheduled job execution start time: 2024-03-22T14:04:06+01. Equivalent local time: 2024-03-22 14:04:06
Current status: SUCCEEDED
Result file path: "/u01/app/oracle/chkbase/scheduled/job-73-2024-03-22-14:04:13.log"
Metrics file path: "/u01/app/oracle/chkbase/scheduled/job-73-2024-03-22-14:04:13.json"
Job execution start time: 2024-03-22 14:04:13
Job execution end time: 2024-03-22 14:25:20
Job execution elapsed time: 9 minutes 13 seconds
ZDM_GET_SRC_INFO ........... PRECHECK_PASSED
ZDM_GET_TGT_INFO ........... PRECHECK_PASSED
ZDM_PRECHECKS_SRC .......... PRECHECK_PASSED
ZDM_PRECHECKS_TGT .......... PRECHECK_PASSED
ZDM_SETUP_SRC .............. PRECHECK_PASSED
ZDM_SETUP_TGT .............. PRECHECK_PASSED
ZDM_PREUSERACTIONS ......... PRECHECK_PASSED
ZDM_PREUSERACTIONS_TGT ..... PRECHECK_PASSED
ZDM_VALIDATE_SRC ........... PRECHECK_PASSED
ZDM_VALIDATE_TGT ........... PRECHECK_PASSED
ZDM_POSTUSERACTIONS ........ PRECHECK_PASSED
ZDM_POSTUSERACTIONS_TGT .... PRECHECK_PASSED
ZDM_CLEANUP_SRC ............ PRECHECK_PASSED
ZDM_CLEANUP_TGT ............ PRECHECK_PASSED
Run the migration and pause after ZDM_CONFIGURE_DG_SRC phase
We now start the migration up to the point where Data Guard is set up. We run the migration and pause it once Data Guard is configured, using the option -pauseafter ZDM_CONFIGURE_DG_SRC.
This allows us to prepare the migration and have the standby database created on the ExaCC and Data Guard configured with no downtime. No maintenance window is needed for this part of the migration.
[zdmuser@zdmhost migration]$ /u01/app/oracle/product/zdm/bin/zdmcli migrate database -sourcesid ONPR -rsp /home/zdmuser/migration/zdm_ONPR_physical_online.rsp -sourcenode vmonpr -srcauth zdmauth -srcarg1 user:oracle -srcarg2 identity_file:/home/zdmuser/.ssh/id_rsa -srcarg3 sudo_location:/usr/bin/sudo -targetnode ExaCC-cl01n1 -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/zdmuser/.ssh/id_rsa -tgtarg3 sudo_location:/usr/bin/sudo -tdekeystorepasswd -tgttdekeystorepasswd -pauseafter ZDM_CONFIGURE_DG_SRC
zdmhost.domain.com: Audit ID: 1156
Enter source database ONPR SYS password:
Enter source database ONPR TDE keystore password:
Enter target container database TDE keystore password:
zdmhost: 2024-03-22T14:29:56.176Z : Processing response file ...
Operation "zdmcli migrate database" scheduled with the job ID "75".
Checking the job we can see it is failing at the restore phase.
[zdmuser@zdmhost migration]$ /u01/app/oracle/product/zdm/bin/zdmcli query job -jobid 75
zdmhost.domain.com: Audit ID: 1160
Job ID: 75
User: zdmuser
Client: zdmhost
Job Type: "MIGRATE"
Scheduled job command: "zdmcli migrate database -sourcesid ONPR -rsp /home/zdmuser/migration/zdm_ONPR_physical_online.rsp -sourcenode vmonpr -srcauth zdmauth -srcarg1 user:oracle -srcarg2 identity_file:/home/zdmuser/.ssh/id_rsa -srcarg3 sudo_location:/usr/bin/sudo -targetnode ExaCC-cl01n1 -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/zdmuser/.ssh/id_rsa -tgtarg3 sudo_location:/usr/bin/sudo -tdekeystorepasswd -tgttdekeystorepasswd -pauseafter ZDM_CONFIGURE_DG_SRC"
Scheduled job execution start time: 2024-03-22T15:29:56+01. Equivalent local time: 2024-03-22 15:29:56
Current status: FAILED
Result file path: "/u01/app/oracle/chkbase/scheduled/job-75-2024-03-22-15:30:13.log"
Metrics file path: "/u01/app/oracle/chkbase/scheduled/job-75-2024-03-22-15:30:13.json"
Job execution start time: 2024-03-22 15:30:13
Job execution end time: 2024-03-22 15:41:05
Job execution elapsed time: 8 minutes 2 seconds
ZDM_GET_SRC_INFO ................ COMPLETED
ZDM_GET_TGT_INFO ................ COMPLETED
ZDM_PRECHECKS_SRC ............... COMPLETED
ZDM_PRECHECKS_TGT ............... COMPLETED
ZDM_SETUP_SRC ................... COMPLETED
ZDM_SETUP_TGT ................... COMPLETED
ZDM_PREUSERACTIONS .............. COMPLETED
ZDM_PREUSERACTIONS_TGT .......... COMPLETED
ZDM_VALIDATE_SRC ................ COMPLETED
ZDM_VALIDATE_TGT ................ COMPLETED
ZDM_DISCOVER_SRC ................ COMPLETED
ZDM_COPYFILES ................... COMPLETED
ZDM_PREPARE_TGT ................. COMPLETED
ZDM_SETUP_TDE_TGT ............... COMPLETED
ZDM_RESTORE_TGT ................. FAILED
ZDM_RECOVER_TGT ................. PENDING
ZDM_FINALIZE_TGT ................ PENDING
ZDM_CONFIGURE_DG_SRC ............ PENDING
ZDM_SWITCHOVER_SRC .............. PENDING
ZDM_SWITCHOVER_TGT .............. PENDING
ZDM_POST_DATABASE_OPEN_TGT ...... PENDING
ZDM_NONCDBTOPDB_PRECHECK ........ PENDING
ZDM_NONCDBTOPDB_CONVERSION ...... PENDING
ZDM_POST_MIGRATE_TGT ............ PENDING
TIMEZONE_UPGRADE_PREPARE_TGT .... PENDING
TIMEZONE_UPGRADE_TGT ............ PENDING
ZDM_POSTUSERACTIONS ............. PENDING
ZDM_POSTUSERACTIONS_TGT ......... PENDING
ZDM_CLEANUP_SRC ................. PENDING
ZDM_CLEANUP_TGT ................. PENDING
Pause After Phase: "ZDM_CONFIGURE_DG_SRC"
The ZDM log shows the following:
PRGO-4086 : failed to query the "VERSION" details from the view "V$INSTANCE" for database "ONPRZ_APP_001T"
SQL*Plus: Release 19.0.0.0.0 - Production on Fri Mar 22 13:34:55 2024
Version 19.22.0.0.0
Copyright (c) 1982, 2023, Oracle. All rights reserved.
Connected to an idle instance.
alter session set NLS_LANGUAGE='AMERICAN'
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0
This is due to a new bug in the latest ZDM patch, where ZDM incorrectly sets the audit_trail instance parameter for the temporary instance called ONPRZ_APP_001T (the final PDB name). As we can see, the parameter is set to EXTENDED, which is not a valid value: it must be DB or DB, EXTENDED, but EXTENDED on its own is not allowed. This is why the instance cannot be started.
SQL> !strings /u02/app/oracle/product/19.0.0.0/dbhome_1/dbs/spfileONPRZ_APP_001T1.ora | grep -i audit_trail
*.audit_trail='EXTENDED'
On the source database, the parameter is set as:
SQL> show parameter audit_trail

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB, EXTENDED
And on the target database, the parameter is set as:
SQL> show parameter audit_trail

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB
So we just need to create a pfile from the spfile, update it accordingly, and recreate the spfile from the updated pfile.
SQL> create pfile='/tmp/initONPRZ_APP_001T1_prob.ora' from spfile='/u02/app/oracle/product/19.0.0.0/dbhome_1/dbs/spfileONPRZ_APP_001T1.ora';

File created.

SQL> !vi /tmp/initONPRZ_APP_001T1_prob.ora

SQL> !grep -i audit_trail /tmp/initONPRZ_APP_001T1_prob.ora
*.audit_trail='DB','EXTENDED'

SQL> create spfile='/u02/app/oracle/product/19.0.0.0/dbhome_1/dbs/spfileONPRZ_APP_001T1.ora' from pfile='/tmp/initONPRZ_APP_001T1_prob.ora';

File created.
We can then resume the ZDM job.
[zdmuser@zdmhost migration]$ /u01/app/oracle/product/zdm/bin/zdmcli resume job -jobid 75
zdmhost.domain.com: Audit ID: 1163
Checking the job, we can see that all phases up to the switchover completed successfully and that the job is now paused.
[zdmuser@zdmhost migration]$ /u01/app/oracle/product/zdm/bin/zdmcli query job -jobid 75
zdmhost.domain.com: Audit ID: 1164
Job ID: 75
User: zdmuser
Client: zdmhost
Job Type: "MIGRATE"
Scheduled job command: "zdmcli migrate database -sourcesid ONPR -rsp /home/zdmuser/migration/zdm_ONPR_physical_online.rsp -sourcenode vmonpr -srcauth zdmauth -srcarg1 user:oracle -srcarg2 identity_file:/home/zdmuser/.ssh/id_rsa -srcarg3 sudo_location:/usr/bin/sudo -targetnode ExaCC-cl01n1 -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/zdmuser/.ssh/id_rsa -tgtarg3 sudo_location:/usr/bin/sudo -tdekeystorepasswd -tgttdekeystorepasswd -pauseafter ZDM_CONFIGURE_DG_SRC"
Scheduled job execution start time: 2024-03-22T15:29:56+01. Equivalent local time: 2024-03-22 15:29:56
Current status: PAUSED
Current Phase: "ZDM_CONFIGURE_DG_SRC"
Result file path: "/u01/app/oracle/chkbase/scheduled/job-75-2024-03-22-15:30:13.log"
Metrics file path: "/u01/app/oracle/chkbase/scheduled/job-75-2024-03-22-15:30:13.json"
Job execution start time: 2024-03-22 15:30:13
Job execution end time: 2024-03-22 16:32:20
Job execution elapsed time: 19 minutes 44 seconds
ZDM_GET_SRC_INFO ................ COMPLETED
ZDM_GET_TGT_INFO ................ COMPLETED
ZDM_PRECHECKS_SRC ............... COMPLETED
ZDM_PRECHECKS_TGT ............... COMPLETED
ZDM_SETUP_SRC ................... COMPLETED
ZDM_SETUP_TGT ................... COMPLETED
ZDM_PREUSERACTIONS .............. COMPLETED
ZDM_PREUSERACTIONS_TGT .......... COMPLETED
ZDM_VALIDATE_SRC ................ COMPLETED
ZDM_VALIDATE_TGT ................ COMPLETED
ZDM_DISCOVER_SRC ................ COMPLETED
ZDM_COPYFILES ................... COMPLETED
ZDM_PREPARE_TGT ................. COMPLETED
ZDM_SETUP_TDE_TGT ............... COMPLETED
ZDM_RESTORE_TGT ................. COMPLETED
ZDM_RECOVER_TGT ................. COMPLETED
ZDM_FINALIZE_TGT ................ COMPLETED
ZDM_CONFIGURE_DG_SRC ............ COMPLETED
ZDM_SWITCHOVER_SRC .............. PENDING
ZDM_SWITCHOVER_TGT .............. PENDING
ZDM_POST_DATABASE_OPEN_TGT ...... PENDING
ZDM_NONCDBTOPDB_PRECHECK ........ PENDING
ZDM_NONCDBTOPDB_CONVERSION ...... PENDING
ZDM_POST_MIGRATE_TGT ............ PENDING
TIMEZONE_UPGRADE_PREPARE_TGT .... PENDING
TIMEZONE_UPGRADE_TGT ............ PENDING
ZDM_POSTUSERACTIONS ............. PENDING
ZDM_POSTUSERACTIONS_TGT ......... PENDING
ZDM_CLEANUP_SRC ................. PENDING
ZDM_CLEANUP_TGT ................. PENDING
Pause After Phase: "ZDM_CONFIGURE_DG_SRC"
We can check and confirm that the standby (ExaCC) is configured with the primary (on-premises) with no gap.
oracle@vmonpr:/home/oracle/ [ONPR] dgmgrl

DGMGRL for Linux: Release 19.0.0.0.0 - Production on Fri Mar 22 16:40:23 2024
Version 19.22.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /
Connected to "ONPR"
Connected as SYSDG.
DGMGRL> show configuration lag

Configuration - ZDM_onpr

  Protection Mode: MaxPerformance
  Members:
  onpr           - Primary database
    onprz_app_001t - Physical standby database
                     Transport Lag:      0 seconds (computed 1 second ago)
                     Apply Lag:          0 seconds (computed 1 second ago)

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 22 seconds ago)
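The lag can also be verified with SQL on the standby side, for example:

```sql
-- Run on the standby database; both values should be "+00 00:00:00"
-- (or very close to it) when the standby is in sync.
SELECT name, value, time_computed
FROM   v$dataguard_stats
WHERE  name IN ('transport lag', 'apply lag');
```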
Run the migration and pause after ZDM_SWITCHOVER_TGT phase (switchover)
The maintenance window is now needed: we will have ZDM run the switchover. For this, we just resume the job and pause it after the ZDM_SWITCHOVER_TGT phase.
[zdmuser@zdmhost migration]$ /u01/app/oracle/product/zdm/bin/zdmcli resume job -jobid 75 -pauseafter ZDM_SWITCHOVER_TGT
zdmhost.domain.com: Audit ID: 1165
Checking the job status, we can see that the job failed on the ZDM_SWITCHOVER_SRC phase.
[zdmuser@zdmhost migration]$ /u01/app/oracle/product/zdm/bin/zdmcli query job -jobid 75
zdmhost.domain.com: Audit ID: 1166
Job ID: 75
User: zdmuser
Client: zdmhost
Job Type: "MIGRATE"
Scheduled job command: "zdmcli migrate database -sourcesid ONPR -rsp /home/zdmuser/migration/zdm_ONPR_physical_online.rsp -sourcenode vmonpr -srcauth zdmauth -srcarg1 user:oracle -srcarg2 identity_file:/home/zdmuser/.ssh/id_rsa -srcarg3 sudo_location:/usr/bin/sudo -targetnode ExaCC-cl01n1 -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/zdmuser/.ssh/id_rsa -tgtarg3 sudo_location:/usr/bin/sudo -tdekeystorepasswd -tgttdekeystorepasswd -pauseafter ZDM_CONFIGURE_DG_SRC"
Scheduled job execution start time: 2024-03-22T15:29:56+01. Equivalent local time: 2024-03-22 15:29:56
Current status: FAILED
Result file path: "/u01/app/oracle/chkbase/scheduled/job-75-2024-03-22-15:30:13.log"
Metrics file path: "/u01/app/oracle/chkbase/scheduled/job-75-2024-03-22-15:30:13.json"
Job execution start time: 2024-03-22 15:30:13
Job execution end time: 2024-03-22 16:43:34
Job execution elapsed time: 22 minutes 4 seconds
ZDM_GET_SRC_INFO ................ COMPLETED
ZDM_GET_TGT_INFO ................ COMPLETED
ZDM_PRECHECKS_SRC ............... COMPLETED
ZDM_PRECHECKS_TGT ............... COMPLETED
ZDM_SETUP_SRC ................... COMPLETED
ZDM_SETUP_TGT ................... COMPLETED
ZDM_PREUSERACTIONS .............. COMPLETED
ZDM_PREUSERACTIONS_TGT .......... COMPLETED
ZDM_VALIDATE_SRC ................ COMPLETED
ZDM_VALIDATE_TGT ................ COMPLETED
ZDM_DISCOVER_SRC ................ COMPLETED
ZDM_COPYFILES ................... COMPLETED
ZDM_PREPARE_TGT ................. COMPLETED
ZDM_SETUP_TDE_TGT ............... COMPLETED
ZDM_RESTORE_TGT ................. COMPLETED
ZDM_RECOVER_TGT ................. COMPLETED
ZDM_FINALIZE_TGT ................ COMPLETED
ZDM_CONFIGURE_DG_SRC ............ COMPLETED
ZDM_SWITCHOVER_SRC .............. FAILED
ZDM_SWITCHOVER_TGT .............. PENDING
ZDM_POST_DATABASE_OPEN_TGT ...... PENDING
ZDM_NONCDBTOPDB_PRECHECK ........ PENDING
ZDM_NONCDBTOPDB_CONVERSION ...... PENDING
ZDM_POST_MIGRATE_TGT ............ PENDING
TIMEZONE_UPGRADE_PREPARE_TGT .... PENDING
TIMEZONE_UPGRADE_TGT ............ PENDING
ZDM_POSTUSERACTIONS ............. PENDING
ZDM_POSTUSERACTIONS_TGT ......... PENDING
ZDM_CLEANUP_SRC ................. PENDING
ZDM_CLEANUP_TGT ................. PENDING
Pause After Phase: "ZDM_SWITCHOVER_TGT"
This is another ZDM bug, hit when ZDM is configured to use the Data Guard broker. See my blog for the explanation: https://www.dbi-services.com/blog/zdm-physical-online-migration-failing-during-zdm_switchover_src-phase/
ZDM log is showing:
PRGZ-3605 : Oracle Data Guard Broker switchover to database "ONPRZ_APP_001T" on database "ONPR" failed.
ONPRZ_APP_001T
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Fri Mar 22 15:43:19 2024
Version 19.22.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
Welcome to DGMGRL, type "help" for information.
Connected to "ONPR"
Connected as SYSDG.
DGMGRL> Performing switchover NOW, please wait...
Operation requires a connection to database "onprz_app_001t"
Connecting ...
Connected to "ONPRZ_APP_001T"
Connected as SYSDBA.
New primary database "onprz_app_001t" is opening...
Operation requires start up of instance "ONPR" on database "onpr"
Starting instance "ONPR"...
ORA-01017: invalid username/password; logon denied
Please complete the following steps to finish switchover:
start up and mount instance "ONPR" of database "onpr"
The temporary database on the ExaCC now has the PRIMARY role and is opened READ WRITE.
oracle@ExaCC-cl01n1:/u02/app/oracle/zdm/zdm_ONPR_RZ2_75/zdm/log/ [ONPR1 (CDB$ROOT)] ps -ef | grep [p]mon | grep -i sand
oracle   51392     1  0 14:23 ?        00:00:00 ora_pmon_ONPRZ_APP_001T1

oracle@ExaCC-cl01n1:/u02/app/oracle/zdm/zdm_ONPR_RZ2_75/zdm/log/ [ONPR1 (CDB$ROOT)] export ORACLE_SID=ONPRZ_APP_001T1

oracle@ExaCC-cl01n1:/u02/app/oracle/zdm/zdm_ONPR_RZ2_75/zdm/log/ [ONPRZ_APP_001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Mar 22 16:45:32 2024
Version 19.22.0.0.0

Copyright (c) 1982, 2023, Oracle. All rights reserved.

Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.22.0.0.0

SQL> select open_mode, database_role from v$database;

OPEN_MODE            DATABASE_ROLE
-------------------- ----------------
READ WRITE           PRIMARY
The on-premises database is stopped. Let's start it in MOUNT state.
oracle@vmonpr:/u00/app/oracle/zdm/zdm_ONPR_75/zdm/log/ [ONPR] ONPR
********* dbi services Ltd. *********
STATUS          : STOPPED
*************************************

oracle@vmonpr:/u00/app/oracle/zdm/zdm_ONPR_75/zdm/log/ [ONPR] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Mar 22 16:45:51 2024
Version 19.22.0.0.0

Copyright (c) 1982, 2023, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup mount
ORACLE instance started.

Total System Global Area 2147479664 bytes
Fixed Size                  8941680 bytes
Variable Size            1224736768 bytes
Database Buffers          905969664 bytes
Redo Buffers                7831552 bytes
Database mounted.
And we check that the Data Guard configuration is in sync with no gap.
oracle@vmonpr:/u00/app/oracle/zdm/zdm_ONPR_75/zdm/log/ [ONPR] dgmgrl
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Fri Mar 22 16:47:29 2024
Version 19.22.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /
Connected to "ONPR"
Connected as SYSDG.
DGMGRL> show configuration lag

Configuration - ZDM_onpr

  Protection Mode: MaxPerformance
  Members:
  onprz_app_001t - Primary database
    onpr         - Physical standby database
                   Transport Lag:      0 seconds (computed 1 second ago)
                   Apply Lag:          0 seconds (computed 1 second ago)

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 16 seconds ago)
As we completed the ZDM_SWITCHOVER_SRC phase manually, we need to update the ZDM metadata and set this phase to SUCCESS.
[zdmuser@zdmhost vmonpr+ONPR+ExaCC-cl01n1]$ pwd
/u01/app/oracle/chkbase/GHcheckpoints/vmonpr+ONPR+ExaCC-cl01n1

[zdmuser@zdmhost vmonpr+ONPR+ExaCC-cl01n1]$ cp -p vmonpr+ONPR+ExaCC-cl01n1.xml vmonpr+ONPR+ExaCC-cl01n1.xml.20240322_prob_broker

[zdmuser@zdmhost vmonpr+ONPR+ExaCC-cl01n1]$ vi vmonpr+ONPR+ExaCC-cl01n1.xml

[zdmuser@zdmhost vmonpr+ONPR+ExaCC-cl01n1]$ diff vmonpr+ONPR+ExaCC-cl01n1.xml vmonpr+ONPR+ExaCC-cl01n1.xml.20240322_prob_broker
106c106
< <CHECKPOINT LEVEL="MAJOR" NAME="ZDM_SWITCHOVER_SRC" DESC="ZDM_SWITCHOVER_SRC" STATE="SUCCESS"/>
---
> <CHECKPOINT LEVEL="MAJOR" NAME="ZDM_SWITCHOVER_SRC" DESC="ZDM_SWITCHOVER_SRC" STATE="START"/>
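As an alternative to editing the file manually with vi, the state change can be scripted. This is just a sketch on a miniature demo file (the real checkpoint file lives under /u01/app/oracle/chkbase/GHcheckpoints/ and should always be backed up first, as shown above):

```shell
# Demo on a one-line copy of the checkpoint entry; the sed expression is what
# you would run against the real vmonpr+ONPR+ExaCC-cl01n1.xml after a backup.
cd "$(mktemp -d)"
cat > checkpoint_demo.xml <<'EOF'
<CHECKPOINT LEVEL="MAJOR" NAME="ZDM_SWITCHOVER_SRC" DESC="ZDM_SWITCHOVER_SRC" STATE="START"/>
EOF
cp -p checkpoint_demo.xml checkpoint_demo.xml.bak   # always keep a backup
sed -i 's/\(NAME="ZDM_SWITCHOVER_SRC"[^>]*STATE="\)START"/\1SUCCESS"/' checkpoint_demo.xml
grep ZDM_SWITCHOVER_SRC checkpoint_demo.xml
```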
We can now resume the ZDM job.
[zdmuser@zdmhost vmonpr+ONPR+ExaCC-cl01n1]$ /u01/app/oracle/product/zdm/bin/zdmcli resume job -jobid 75 -pauseafter ZDM_SWITCHOVER_TGT
zdmhost.domain.com: Audit ID: 1167
And we can see that all phases up to and including the switchover are now completed successfully.
[zdmuser@zdmhost vmonpr+ONPR+ExaCC-cl01n1]$ /u01/app/oracle/product/zdm/bin/zdmcli query job -jobid 75
zdmhost.domain.com: Audit ID: 1168
Job ID: 75
User: zdmuser
Client: zdmhost
Job Type: "MIGRATE"
Scheduled job command: "zdmcli migrate database -sourcesid ONPR -rsp /home/zdmuser/migration/zdm_ONPR_physical_online.rsp -sourcenode vmonpr -srcauth zdmauth -srcarg1 user:oracle -srcarg2 identity_file:/home/zdmuser/.ssh/id_rsa -srcarg3 sudo_location:/usr/bin/sudo -targetnode ExaCC-cl01n1 -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/zdmuser/.ssh/id_rsa -tgtarg3 sudo_location:/usr/bin/sudo -tdekeystorepasswd -tgttdekeystorepasswd -pauseafter ZDM_CONFIGURE_DG_SRC"
Scheduled job execution start time: 2024-03-22T15:29:56+01. Equivalent local time: 2024-03-22 15:29:56
Current status: PAUSED
Current Phase: "ZDM_SWITCHOVER_TGT"
Result file path: "/u01/app/oracle/chkbase/scheduled/job-75-2024-03-22-15:30:13.log"
Metrics file path: "/u01/app/oracle/chkbase/scheduled/job-75-2024-03-22-15:30:13.json"
Job execution start time: 2024-03-22 15:30:13
Job execution end time: 2024-03-22 16:53:21
Job execution elapsed time: 24 minutes 42 seconds
ZDM_GET_SRC_INFO ................ COMPLETED
ZDM_GET_TGT_INFO ................ COMPLETED
ZDM_PRECHECKS_SRC ............... COMPLETED
ZDM_PRECHECKS_TGT ............... COMPLETED
ZDM_SETUP_SRC ................... COMPLETED
ZDM_SETUP_TGT ................... COMPLETED
ZDM_PREUSERACTIONS .............. COMPLETED
ZDM_PREUSERACTIONS_TGT .......... COMPLETED
ZDM_VALIDATE_SRC ................ COMPLETED
ZDM_VALIDATE_TGT ................ COMPLETED
ZDM_DISCOVER_SRC ................ COMPLETED
ZDM_COPYFILES ................... COMPLETED
ZDM_PREPARE_TGT ................. COMPLETED
ZDM_SETUP_TDE_TGT ............... COMPLETED
ZDM_RESTORE_TGT ................. COMPLETED
ZDM_RECOVER_TGT ................. COMPLETED
ZDM_FINALIZE_TGT ................ COMPLETED
ZDM_CONFIGURE_DG_SRC ............ COMPLETED
ZDM_SWITCHOVER_SRC .............. COMPLETED
ZDM_SWITCHOVER_TGT .............. COMPLETED
ZDM_POST_DATABASE_OPEN_TGT ...... PENDING
ZDM_NONCDBTOPDB_PRECHECK ........ PENDING
ZDM_NONCDBTOPDB_CONVERSION ...... PENDING
ZDM_POST_MIGRATE_TGT ............ PENDING
TIMEZONE_UPGRADE_PREPARE_TGT .... PENDING
TIMEZONE_UPGRADE_TGT ............ PENDING
ZDM_POSTUSERACTIONS ............. PENDING
ZDM_POSTUSERACTIONS_TGT ......... PENDING
ZDM_CLEANUP_SRC ................. PENDING
ZDM_CLEANUP_TGT ................. PENDING
Pause After Phase: "ZDM_SWITCHOVER_TGT"
The ZDM log is showing a warning.
zdmhost: 2024-03-22T15:52:03.091Z : Skipping phase ZDM_SWITCHOVER_SRC on resume
zdmhost: 2024-03-22T15:52:03.117Z : Executing phase ZDM_SWITCHOVER_TGT
zdmhost: 2024-03-22T15:52:03.117Z : Switching database ONPR_RZ2 on the target node ExaCC-cl01n1 to primary role ...
ExaCC-cl01n1: 2024-03-22T15:53:21.918Z : WARNING: Post migration, on-premise database will not be in sync with new primary of the database.
ExaCC-cl01n1: 2024-03-22T15:53:21.919Z : Switchover actions in the target environment executed successfully
zdmhost: 2024-03-22T15:53:21.923Z : Execution of phase ZDM_SWITCHOVER_TGT completed
####################################################################
zdmhost: 2024-03-22T15:53:21.936Z : Job execution paused after phase "ZDM_SWITCHOVER_TGT".
We can ignore this warning: the Data Guard configuration is in sync and, in any case, we are not going to use any fallback.
Convert ExaCC target database back to RAC
We will now convert the ExaCC target database, which hosts the PDB that ZDM creates, to RAC. If we do not do so, the PDB will only have one UNDO tablespace, and we will run into problems when relocating it to the final CDB, which will be RAC anyway.
oracle@ExaCC-cl01n1:~/ [ONPRZ_APP_001T1 (CDB$ROOT)] ONPR1
**********************************
INSTANCE_NAME   : ONPR1
DB_NAME         : ONPR
DB_UNIQUE_NAME  : ONPR_RZ2
STATUS          : OPEN READ WRITE
LOG_MODE        : ARCHIVELOG
USERS/SESSIONS  : 2/7
DATABASE_ROLE   : PRIMARY
FLASHBACK_ON    : YES
FORCE_LOGGING   : YES
VERSION         : 19.22.0.0.0
CDB_ENABLED     : YES
PDBs            : PDB$SEED
**********************************
PDB color: pdbname=mount, pdbname=open read-write, pdbname=open read-only
Statustime: 2024-03-21 09:30:32

oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Mar 22 16:56:19 2024
Version 19.22.0.0.0

Copyright (c) 1982, 2023, Oracle. All rights reserved.

Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.22.0.0.0

SQL> select inst_id, name, value from gv$parameter where lower(name)='cluster_database';

   INST_ID NAME                 VALUE
---------- -------------------- --------------------
         1 cluster_database     FALSE

SQL> alter system set cluster_database=TRUE scope=spfile sid='*';

System altered.

oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)] srvctl status database -d ONPR_RZ2
Instance ONPR1 is running on node ExaCC-cl01n1

oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)] srvctl stop database -d ONPR_RZ2

oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)] srvctl status database -d ONPR_RZ2
Instance ONPR1 is not running on node ExaCC-cl01n1

oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)] srvctl add instance -d ONPR_RZ2 -i ONPR2 -node ExaCC-cl01n2

oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)] srvctl start database -d ONPR_RZ2

oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)] srvctl status database -d ONPR_RZ2
Instance ONPR1 is running on node ExaCC-cl01n1
Instance ONPR2 is running on node ExaCC-cl01n2

oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Mar 22 16:58:36 2024
Version 19.22.0.0.0

Copyright (c) 1982, 2023, Oracle. All rights reserved.

Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.22.0.0.0

SQL> select inst_id, name, value from gv$parameter where lower(name)='cluster_database';

   INST_ID NAME                 VALUE
---------- -------------------- --------------------
         2 cluster_database     TRUE
         1 cluster_database     TRUE
Compatible parameter
The compatible parameter of a 19c database should, in most situations, be set to 19.0.0, and there is usually no real reason to change it to a more specific value. If the source uses another value, we need to change the target database compatible parameter so that source and target match. This will also have to be taken into account for the PDB relocation.
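As a quick sketch (the value shown is illustrative; check your own source setting first), the comparison and adjustment can be done as follows. Note that a compatible change is only picked up after an instance restart and cannot be reverted:

```sql
-- Run on both source and target to compare the settings
show parameter compatible

-- On the target, align the value with the source if they differ
-- (scope=spfile: the new value only takes effect after a restart)
alter system set compatible='19.0.0' scope=spfile sid='*';
```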
Run the migration
We can now resume the ZDM job with no further pause, so it runs until the end. This includes the non-CDB to PDB conversion and the database timezone upgrade.
[zdmuser@zdmhost vmonpr+ONPR+ExaCC-cl01n1]$ /u01/app/oracle/product/zdm/bin/zdmcli resume job -jobid 75
zdmhost.domain.com: Audit ID: 1169
The job has been completed successfully.
[zdmuser@zdmhost vmonpr+ONPR+ExaCC-cl01n1]$ /u01/app/oracle/product/zdm/bin/zdmcli query job -jobid 75
zdmhost.domain.com: Audit ID: 1172
Job ID: 75
User: zdmuser
Client: zdmhost
Job Type: "MIGRATE"
Scheduled job command: "zdmcli migrate database -sourcesid ONPR -rsp /home/zdmuser/migration/zdm_ONPR_physical_online.rsp -sourcenode vmonpr -srcauth zdmauth -srcarg1 user:oracle -srcarg2 identity_file:/home/zdmuser/.ssh/id_rsa -srcarg3 sudo_location:/usr/bin/sudo -targetnode ExaCC-cl01n1 -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/zdmuser/.ssh/id_rsa -tgtarg3 sudo_location:/usr/bin/sudo -tdekeystorepasswd -tgttdekeystorepasswd -pauseafter ZDM_CONFIGURE_DG_SRC"
Scheduled job execution start time: 2024-03-22T15:29:56+01. Equivalent local time: 2024-03-22 15:29:56
Current status: SUCCEEDED
Result file path: "/u01/app/oracle/chkbase/scheduled/job-75-2024-03-22-15:30:13.log"
Metrics file path: "/u01/app/oracle/chkbase/scheduled/job-75-2024-03-22-15:30:13.json"
Job execution start time: 2024-03-22 15:30:13
Job execution end time: 2024-03-22 17:08:49
Job execution elapsed time: 33 minutes 18 seconds
ZDM_GET_SRC_INFO ................ COMPLETED
ZDM_GET_TGT_INFO ................ COMPLETED
ZDM_PRECHECKS_SRC ............... COMPLETED
ZDM_PRECHECKS_TGT ............... COMPLETED
ZDM_SETUP_SRC ................... COMPLETED
ZDM_SETUP_TGT ................... COMPLETED
ZDM_PREUSERACTIONS .............. COMPLETED
ZDM_PREUSERACTIONS_TGT .......... COMPLETED
ZDM_VALIDATE_SRC ................ COMPLETED
ZDM_VALIDATE_TGT ................ COMPLETED
ZDM_DISCOVER_SRC ................ COMPLETED
ZDM_COPYFILES ................... COMPLETED
ZDM_PREPARE_TGT ................. COMPLETED
ZDM_SETUP_TDE_TGT ............... COMPLETED
ZDM_RESTORE_TGT ................. COMPLETED
ZDM_RECOVER_TGT ................. COMPLETED
ZDM_FINALIZE_TGT ................ COMPLETED
ZDM_CONFIGURE_DG_SRC ............ COMPLETED
ZDM_SWITCHOVER_SRC .............. COMPLETED
ZDM_SWITCHOVER_TGT .............. COMPLETED
ZDM_POST_DATABASE_OPEN_TGT ...... COMPLETED
ZDM_NONCDBTOPDB_PRECHECK ........ COMPLETED
ZDM_NONCDBTOPDB_CONVERSION ...... COMPLETED
ZDM_POST_MIGRATE_TGT ............ COMPLETED
TIMEZONE_UPGRADE_PREPARE_TGT .... COMPLETED
TIMEZONE_UPGRADE_TGT ............ COMPLETED
ZDM_POSTUSERACTIONS ............. COMPLETED
ZDM_POSTUSERACTIONS_TGT ......... COMPLETED
ZDM_CLEANUP_SRC ................. COMPLETED
ZDM_CLEANUP_TGT ................. COMPLETED
Checks
If we check the new PDB, we can see that it is opened READ WRITE with no restriction.
oracle@ExaCC-cl01n1:/u02/app/oracle/zdm/zdm_ONPR_RZ2_75/zdm/log/ [ONPR1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Mar 22 17:11:34 2024
Version 19.22.0.0.0

Copyright (c) 1982, 2023, Oracle. All rights reserved.

Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.22.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ONPRZ_APP_001T                 READ WRITE NO
The application tablespaces are all encrypted. The SYSTEM, SYSAUX, UNDO and TEMP tablespaces are not, which is expected. We will encrypt them manually.
We can check the PDB violations.
SQL> alter session set container=ONPRZ_APP_001T;

Session altered.

SQL> select status, message from pdb_plug_in_violations;

STATUS    MESSAGE
--------- ------------------------------------------------------------------------------------------------------------------------
RESOLVED  PDB needs to import keys from source.
RESOLVED  Database option RAC mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.
RESOLVED  PDB plugged in is a non-CDB, requires noncdb_to_pdb.sql be run.
PENDING   Database option APS mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.
PENDING   Database option CATJAVA mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.
PENDING   Database option CONTEXT mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.
PENDING   Database option DV mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.
PENDING   Database option JAVAVM mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.
PENDING   Database option OLS mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.
PENDING   Database option ORDIM mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.
PENDING   Database option SDO mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.
PENDING   Database option XML mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.
PENDING   Database option XOQ mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.
PENDING   Tablespace SYSTEM is not encrypted. Oracle Cloud mandates all tablespaces should be encrypted.
PENDING   Tablespace SYSAUX is not encrypted. Oracle Cloud mandates all tablespaces should be encrypted.

15 rows selected.
The database option violations can be ignored. See "OPTION WARNING Database option mismatch: PDB installed version NULL" in PDB_PLUG_IN_VIOLATIONS (Doc ID 2020172.1). The encryption violations will be resolved once the tablespaces are encrypted manually. There are various methods to do this; they will not be described here in detail.
To check tablespace encryption:
SQL> select a.con_id, a.tablespace_name, nvl(b.ENCRYPTIONALG,'NOT ENCRYPTED')
     from cdb_tablespaces a,
          (select x.con_id, y.ENCRYPTIONALG, x.name
             from V$TABLESPACE x, V$ENCRYPTED_TABLESPACES y
            where x.ts#=y.ts# and x.con_id=y.con_id) b
     where a.con_id=b.con_id(+) and a.tablespace_name=b.name(+)
     order by 1,2;
To encrypt the tablespaces, we used the following PL/SQL block, run after having closed all PDB instances:
set serveroutput on;
DECLARE
  CURSOR c_cur IS
    SELECT name FROM V$TABLESPACE;
  e_already_encrypted EXCEPTION;
  pragma exception_init( e_already_encrypted, -28431);
  e_tmp_cant_be_encrypted EXCEPTION;
  pragma exception_init( e_tmp_cant_be_encrypted, -28370);
  l_cmd VARCHAR2(200);
BEGIN
  FOR r_rec IN c_cur LOOP
    l_cmd := 'ALTER TABLESPACE '|| r_rec.name || ' ENCRYPTION OFFLINE ENCRYPT';
    dbms_output.put_line('Command: '|| l_cmd );
    BEGIN
      EXECUTE IMMEDIATE l_cmd;
    EXCEPTION
      WHEN e_already_encrypted THEN
        -- ORA-28431: cannot encrypt an already encrypted data file
        NULL;
      WHEN e_tmp_cant_be_encrypted THEN
        -- ORA-28370: ENCRYPT, DECRYPT or REKEY option not allowed
        dbms_output.put_line('CAUTION ! needs further actions for '|| r_rec.name ||' as of ORA-28370: ENCRYPT, DECRYPT or REKEY option not allowed');
    END;
  END LOOP;
END;
/
The TEMP tablespace will have to be recreated. This is a normal DBA task. Once recreated, the new TEMP tablespace will also be encrypted.
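A minimal sketch of such a recreation (the tablespace name and size below are illustrative assumptions; on ExaCC the newly created TEMP tablespace comes out encrypted because new tablespaces are encrypted by default in Oracle Cloud):

```sql
-- Create a replacement temporary tablespace (illustrative name and size)
CREATE TEMPORARY TABLESPACE TEMP_NEW TEMPFILE SIZE 10G;

-- Switch the database default to the new tablespace
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP_NEW;

-- Drop the old, unencrypted TEMP tablespace
DROP TABLESPACE TEMP INCLUDING CONTENTS AND DATAFILES;
```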
Relocate PDB
We can now relocate the migrated PDB to one of the CDB on the ExaCC side.
To do this, we first have to create a new master key for the PDB.
oracle@ExaCC-cl01n1:/u02/app/oracle/zdm/zdm_ONPR_RZ2_75/zdm/log/ [ONPR1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Mar 22 17:11:34 2024
Version 19.22.0.0.0

Copyright (c) 1982, 2023, Oracle. All rights reserved.

Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.22.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ONPRZ_APP_001T                 READ WRITE NO

SQL> alter session set container=ONPRZ_APP_001T;

Session altered.

SQL> ADMINISTER KEY MANAGEMENT SET KEY FORCE KEYSTORE IDENTIFIED BY "*****************" WITH BACKUP USING 'pre-relocate-ONPRZ_APP_001T';

keystore altered.
And we can relocate the PDB using the dbaascli command.
oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)] dbaascli pdb relocate --sourceDBConnectionString ExaCC-cl01-scan:1521/ONPR_RZ2.domain.com --pdbName ONPRZ_APP_001T --dbName XXXXX100T
with:
- --sourceDBConnectionString: the EZConnect string to the current target database used for the migration and containing the PDB
- --pdbName: the name of the PDB to relocate
- --dbName: the DB_NAME of the final CDB
If we check the PDB violations after the relocation, we can see that only the ignorable database option mismatch entries remain.
SQL> select status, message from pdb_plug_in_violations;

STATUS     MESSAGE
---------- ----------------------------------------------------------------------------------------------------
PENDING    Database option APS mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.
PENDING    Database option CATJAVA mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.
PENDING    Database option CONTEXT mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.
PENDING    Database option DV mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.
PENDING    Database option JAVAVM mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.
PENDING    Database option OLS mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.
PENDING    Database option ORDIM mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.
PENDING    Database option SDO mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.
PENDING    Database option XML mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.
PENDING    Database option XOQ mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.

10 rows selected.
The database option violations can be ignored. See “OPTION WARNING Database option mismatch: PDB installed version NULL” in PDB_PLUG_IN_VIOLATIONS (Doc ID 2020172.1)
Fallback
In case the application tests before putting the PDB into production do not allow a GO-LIVE, we can activate the on-premises database as PRIMARY again.
oracle@vmonpr:~/ [ONPR] sqh

SQL> recover managed standby database cancel;
ORA-16136: Managed Standby Recovery not active

SQL> alter database recover managed standby database finish;

Database altered.

SQL> select name,open_mode,database_role from v$database;

NAME      OPEN_MODE            DATABASE_ROLE
--------- -------------------- ----------------
ONPR      MOUNTED              PHYSICAL STANDBY

SQL> shutdown immediate;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 2147479664 bytes
Fixed Size                  8941680 bytes
Variable Size            1224736768 bytes
Database Buffers          905969664 bytes
Redo Buffers                7831552 bytes
Database mounted.
Database opened.

SQL> select name,open_mode,database_role from v$database;

NAME      OPEN_MODE            DATABASE_ROLE
--------- -------------------- ----------------
ONPR      READ ONLY            PHYSICAL STANDBY

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started.

Total System Global Area 2147479664 bytes
Fixed Size                  8941680 bytes
Variable Size            1224736768 bytes
Database Buffers          905969664 bytes
Redo Buffers                7831552 bytes
Database mounted.

SQL> select name,open_mode,database_role from v$database;

NAME      OPEN_MODE            DATABASE_ROLE
--------- -------------------- ----------------
ONPR      MOUNTED              PHYSICAL STANDBY

SQL> alter database activate standby database;

Database altered.

SQL> alter database open;

Database altered.

SQL> select name,open_mode,database_role from v$database;

NAME      OPEN_MODE            DATABASE_ROLE
--------- -------------------- ----------------
ONPR      READ WRITE           PRIMARY
To wrap up
This is a real success story: we were able to successfully migrate an on-premises database to the new ExaCC using ZDM Physical Online. I would like to thank the ZDM product management and development team for their availability and great help in quickly solving an ORA-28374 error that was blocking the migration.