By Clemens Bleile
Recently I upgraded an ODA X7-M from 19.12 to 19.13. After the DCS, server and storage upgrade, several databases on the machine had to be patched from 19.9 to 19.13. The
[root@<node> ~]# odacli create-prepatchreport --dbhome --dbhomeid <home-id> -v 19.13.0.0.0
went through without reporting an issue, but during the
[root@<node> ~]# odacli update-dbhome -i <home-id> -v 19.13.0.0.0 -f
I got an error “DCS-10001:Internal error encountered: null.”:
[root@<node> ~]# odacli describe-job -i "<Job-Id>"

Job details
----------------------------------------------------------------
                     ID:  <Job-Id>
            Description:  DB Home Patching: Home Id is <Home-Id>
                 Status:  Failure
                Created:  February 13, 2022 1:28:41 PM CET
                Message:  DCS-10001:Internal error encountered: null.

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
DB Home Patching                         February 13, 2022 1:29:02 PM CET    February 13, 2022 1:33:28 PM CET    Failure
DB Home Patching                         February 13, 2022 1:29:02 PM CET    February 13, 2022 1:33:28 PM CET    Failure
Adding USER SSH_EQUIVALENCE              February 13, 2022 1:29:02 PM CET    February 13, 2022 1:29:02 PM CET    Success
Adding USER SSH_EQUIVALENCE              February 13, 2022 1:29:02 PM CET    February 13, 2022 1:29:03 PM CET    Success
Adding USER SSH_EQUIVALENCE              February 13, 2022 1:29:03 PM CET    February 13, 2022 1:29:04 PM CET    Success
task:TaskSequential_3705                 February 13, 2022 1:29:04 PM CET    February 13, 2022 1:33:04 PM CET    Failure
Creating wallet for DB Client            February 13, 2022 1:29:44 PM CET    February 13, 2022 1:29:44 PM CET    Success
Patch databases by RHP                   February 13, 2022 1:29:44 PM CET    February 13, 2022 1:31:28 PM CET    Success
updating database metadata               February 13, 2022 1:32:22 PM CET    February 13, 2022 1:32:22 PM CET    Success
Set log_archive_dest for Database        February 13, 2022 1:32:22 PM CET    February 13, 2022 1:32:25 PM CET    Success
Patch databases by RHP                   February 13, 2022 1:32:25 PM CET    February 13, 2022 1:33:04 PM CET    Failure

[root@<node> ~]#
Checking the RHP logfile /opt/oracle/rhp/rhplog/rhpapi.log.0 showed the following:
[UID:<UID>] [Patch databases by RHP : JobId=<Job-Id>] [ 2022-02-13 13:33:04.671 CET ] [BatchMoveOpImpl.internalContinueMove:3348] Batch failed for at least one DB
[UID:<UID>] [Patch databases by RHP : JobId=<Job-Id>] [ 2022-02-13 13:33:04.671 CET ] [DBPatchUpgradeOperationImpl.move:477] attempt to move or upgrade database failed with OperationException : PRCT-1003 : failed to run "rhphelper" on node "<node>"
PRCT-1014 : Internal error: RHPHELP112_mergeLsnr-08
[UID:<UID>] [Patch databases by RHP : JobId=<Job-Id>] [ 2022-02-13 13:33:04.671 CET ] [DatabaseOperationImpl.move:1448] OperationException: PRCT-1003 : failed to run "rhphelper" on node "<node>"
PRCT-1014 : Internal error: RHPHELP112_mergeLsnr-08
[UID:<UID>] [Patch databases by RHP : JobId=<Job-Id>] [ 2022-02-13 13:33:04.671 CET ] [GHOperationCommonImpl.moveDatabase:2978] OperationException: PRCT-1003 : failed to run "rhphelper" on node "<node>"
PRCT-1014 : Internal error: RHPHELP112_mergeLsnr-08
[UID:<UID>] [Patch databases by RHP : JobId=<Job-Id>] [ 2022-02-13 13:33:04.671 CET ] [FPPMBeanImpl.doOp:372] InvocationTargetException caught
[UID:<UID>] [Patch databases by RHP : JobId=<Job-Id>] [ 2022-02-13 13:33:04.672 CET ] [FPPMBeanImpl.doOp:382] Exception: java.lang.NullPointerException: null
So basically the errors were
PRCT-1003 : failed to run "rhphelper" on node "<node>"
PRCT-1014 : Internal error: RHPHELP112_mergeLsnr-08
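If you want to check quickly whether a failed DB home patch hit the same errors, you can pull them out of the RHP log directly. This is just a generic grep against the log path shown above, nothing ODA-specific:
[root@<node> ~]# grep -E 'PRCT-1003|PRCT-1014|NullPointerException' /opt/oracle/rhp/rhplog/rhpapi.log.0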
The analysis revealed the root cause of the issue to be Bug 32833813. It is described in My Oracle Support Note 32833813.8 for an upgrade from 11.2 to 19c, but it obviously may also happen when moving to a new ORACLE_HOME for a Release Update with RHP:
Upgrading 11.2 SI Database to 19c Failed With Error PRCT-1003 : failed to run “rhphelper” on node “<HOSTNAME>” PRCT-1014 : internal error: rhphelp12102_main-02
The bug is fixed in the 19.14.0.0.220118 (JAN 2022) OCW Release Update. According to the MOS Note there is no workaround.
The workaround for me was to change the ORACLE_HOME in the cluster registry to 19.13 for all DBs and finally update the ODA registry. In my case the 19.13 ORACLE_HOME was /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_3:
[oracle@<node> ~]$ srvctl stop database -db <DBNAME>
[oracle@<node> ~]$ srvctl modify database -db <DBNAME> -oraclehome /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_3
[oracle@<node> ~]$ srvctl config database -db <DBNAME>
[oracle@<node> ~]$ srvctl start database -db <DBNAME>
[oracle@<node> ~]$ . oraenv
ORACLE_SID = [oracle] ? <ORACLE_SID>
[oracle@<node> ~]$ cd $ORACLE_HOME/OPatch
[oracle@<node> ~]$ ./datapatch -verbose
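Whether datapatch has actually applied the 19.13 Release Update can be verified in the database itself. This is a generic check against DBA_REGISTRY_SQLPATCH, not something the ODA tooling requires:
[oracle@<node> ~]$ sqlplus -s / as sysdba
SQL> set lines 200
SQL> col action_time format a30
SQL> select patch_id, action, status, action_time from dba_registry_sqlpatch order by action_time;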
REMARKs:
– After modifying the ORACLE_HOME with srvctl, the agent automatically updates /etc/oratab.
– On a standby DB it is of course not necessary to run datapatch.
To automate this I wrote a bash script, so that I could run it against all databases on this server still running in the 19.9 ORACLE_HOME.
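That script is not reproduced here, but a minimal sketch of such a loop could look as follows. The database list, the target home path and the ORACLE_SID derivation are assumptions you would adapt to your environment, and on standby DBs the datapatch step would be skipped:
#!/bin/bash
# Minimal sketch (not the original script): move databases to the new
# 19.13 home and run datapatch afterwards.
set -euo pipefail

NEW_HOME=/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_3

for DB in "$@"; do                              # e.g. ./move_dbs.sh DB1 DB2 DB3
  echo "### moving ${DB} to ${NEW_HOME}"
  srvctl stop database   -db "${DB}"
  srvctl modify database -db "${DB}" -oraclehome "${NEW_HOME}"
  srvctl config database -db "${DB}" | grep -i "oracle home"
  srvctl start database  -db "${DB}"

  # datapatch must run out of the new home; the SID is assumed to be
  # identical to the DB name here (adjust for instance suffixes)
  export ORACLE_HOME="${NEW_HOME}"
  export ORACLE_SID="${DB}"
  export PATH="${ORACLE_HOME}/bin:${PATH}"
  "${ORACLE_HOME}/OPatch/datapatch" -verbose
done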
The last step is to update the ODA registry to reflect the changed ORACLE_HOME of the databases:
[root@<node> ~]# odacli update-registry -n db -f
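To cross-check that the metadata is consistent afterwards, the DB homes and databases known to the ODA can be listed, for example with (IDs and output are environment-specific and omitted here):
[root@<node> ~]# odacli list-dbhomes
[root@<node> ~]# odacli list-databases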