ODA 19.9 was released for Bare Metal yesterday, and I already had the opportunity to patch a customer's production ODA to this latest version. In this blog post I want to share my experience patching an ODA to 19.9, as well as a tricky new skip-orachk option.
Patching requirements
To patch a Bare Metal ODA to version 19.9 (patch 31922078), the ODA must currently run version 19.5, 19.6, 19.7 or 19.8. This is described in the ODA documentation.
First of all we need to ensure we have enough space on the /, /u01 and /opt file systems: at least 20 GB should be available on each. If not, we can do some cleanup or extend the LVM volumes.
[root@ODA01 /]# df -h / /u01 /opt
Filesystem                          Size  Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolRoot   30G  9.5G   19G  34% /
/dev/mapper/VolGroupSys-LogVolU01    99G   55G   40G  59% /u01
/dev/mapper/VolGroupSys-LogVolOpt    75G   43G   29G  60% /opt
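This pre-check is easy to script. A minimal sketch, assuming GNU coreutils df and the 20 GB requirement above:

```shell
# Free-space pre-check sketch (assumes GNU coreutils df).
# Succeeds when the given mount point has at least MIN_GB gigabytes available.
check_free_gb() {
  local mnt="$1" min_gb="$2" avail
  avail=$(df -BG --output=avail "$mnt" 2>/dev/null | tail -1 | tr -dc '0-9')
  [ -n "$avail" ] && [ "$avail" -ge "$min_gb" ]
}

# Verify the three file systems touched by the patching:
for fs in / /u01 /opt; do
  if check_free_gb "$fs" 20; then
    echo "$fs: OK"
  else
    echo "$fs: less than 20 GB free - clean up or extend the LVM volume"
  fi
done
```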
Then we check that there is no existing hardware failure on the ODA. This can be done through the ILOM web GUI or over an SSH connection to the ILOM:
-> show /SP/faultmgmt

 /SP/faultmgmt
    Targets:
        shell

    Properties:

    Commands:
        cd
        show

-> start /SP/faultmgmt/shell
Are you sure you want to start /SP/faultmgmt/shell (y/n)? y

faultmgmtsp> fmadm faulty
No faults found
It is recommended to use the odabr tool and perform a snapshot backup:
[root@ODA01 /]# /opt/odabr/odabr backup -snap
INFO: 2020-11-04 16:30:42: Please check the logfile '/opt/odabr/out/log/odabr_37159.log' for more details

--------------------------------------------------------
odabr - ODA node Backup Restore
Author: Ruggero Citton
RAC Pack, Cloud Innovation and Solution Engineering Team
Copyright Oracle, Inc. 2013, 2019
Version: 2.0.1-47
--------------------------------------------------------

INFO: 2020-11-04 16:30:42: Checking superuser
INFO: 2020-11-04 16:30:42: Checking Bare Metal
INFO: 2020-11-04 16:30:42: Removing existing LVM snapshots
WARNING: 2020-11-04 16:30:42: LVM snapshot for 'opt' does not exist
WARNING: 2020-11-04 16:30:42: LVM snapshot for 'u01' does not exist
WARNING: 2020-11-04 16:30:42: LVM snapshot for 'root' does not exist
INFO: 2020-11-04 16:30:42: Checking LVM size
INFO: 2020-11-04 16:30:42: Doing a snapshot backup only
INFO: 2020-11-04 16:30:42: Boot device backup
INFO: 2020-11-04 16:30:42: ...getting boot device
INFO: 2020-11-04 16:30:42: ...making boot device backup
INFO: 2020-11-04 16:30:44: ...boot device backup saved as '/opt/odabr/out/hbi/boot.img'
INFO: 2020-11-04 16:30:44: Getting EFI device
INFO: 2020-11-04 16:30:44: ...making efi device backup
INFO: 2020-11-04 16:30:46: EFI device backup saved as '/opt/odabr/out/hbi/efi.img'
INFO: 2020-11-04 16:30:46: OCR backup
INFO: 2020-11-04 16:30:47: ...ocr backup saved as '/opt/odabr/out/hbi/ocrbackup_37159.bck'
INFO: 2020-11-04 16:30:47: Making LVM snapshot backup
SUCCESS: 2020-11-04 16:30:49: ...snapshot backup for 'opt' created successfully
SUCCESS: 2020-11-04 16:30:49: ...snapshot backup for 'u01' created successfully
SUCCESS: 2020-11-04 16:30:49: ...snapshot backup for 'root' created successfully
SUCCESS: 2020-11-04 16:30:49: LVM snapshots backup done successfully

[root@ODA01 /]# /opt/odabr/odabr infosnap

--------------------------------------------------------
odabr - ODA node Backup Restore
Author: Ruggero Citton
RAC Pack, Cloud Innovation and Solution Engineering Team
Copyright Oracle, Inc. 2013, 2019
Version: 2.0.1-47
--------------------------------------------------------

LVM snap name         Status        COW Size        Data%
-------------         ----------    ----------      ------
root_snap             active        30.00 GiB       0.01%
opt_snap              active        60.00 GiB       0.01%
u01_snap              active        100.00 GiB      0.01%
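The snapshots should not be kept longer than needed once the patching is over. A hedged sketch of the post-patching odabr commands (delsnap to remove the snapshots, restore -snap to roll back, as documented for odabr); the wrapper below only prints the commands unless DRY_RUN=0:

```shell
# What to do with the odabr snapshots once patching is over (a sketch).
# With DRY_RUN=1 (the default here) the commands are only printed.
ODABR=${ODABR:-/opt/odabr/odabr}
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi
}

# After a successful patch, drop the snapshots to release the COW space:
run "$ODABR" delsnap
# After a failed patch, roll the node back instead (a reboot follows):
# run "$ODABR" restore -snap
```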
We can also run orachk, excluding the RDBMS checks:
[root@ODA01 /]# cd /opt/oracle/dcs/oracle.ahf/orachk
[root@ODA01 orachk]# ./orachk -nordbms
. . . . . .
Either Cluster Verification Utility pack (cvupack) does not exist at /opt/oracle/dcs/oracle.ahf/common/cvu or it is an old or invalid cvupack

Checking Cluster Verification Utility (CVU) version at CRS Home - /u01/app/19.0.0.0/grid

This version of Cluster Verification Utility (CVU) was released on 10-Mar-2020 and it is older than 180 days. It is highly recommended that you download the latest version of CVU from MOS patch 30166242 to ensure the highest level of accuracy of the data contained within the report

Do you want to download latest version of Cluster Verification Utility (CVU) from my oracle support? [y/n] [y] n

Running older version of Cluster Verification Utility (CVU) from CRS Home - /u01/app/19.0.0.0/grid

Checking Status of Oracle Software Stack - Clusterware, ASM, RDBMS on oda01
. . . . . . . . . .
-------------------------------------------------------------------------------------------------------
                                            Oracle Stack Status
-------------------------------------------------------------------------------------------------------
  Host Name       CRS Installed  RDBMS Installed    CRS UP    ASM UP  RDBMS UP    DB Instance Name
-------------------------------------------------------------------------------------------------------
  oda01           Yes            No                 Yes       Yes     No
-------------------------------------------------------------------------------------------------------
. . . . . . . . . . . .
*** Checking Best Practice Recommendations ( Pass / Warning / Fail ) ***

Collections and audit checks log file is
/opt/oracle/dcs/oracle.ahf/data/oda01/orachk/orachk_oda01_110420_163217/log/orachk.log

============================================================
Node name - oda01
============================================================

Collecting - ASM Disk Group for Infrastructure Software and Configuration
Collecting - ASM Diskgroup Attributes
Collecting - ASM initialization parameters
Collecting - Disk I/O Scheduler on Linux
Collecting - Interconnect network card speed
Collecting - Kernel parameters
Collecting - Maximum number of semaphore sets on system
Collecting - Maximum number of semaphores on system
Collecting - Maximum number of semaphores per semaphore set
Collecting - OS Packages
Collecting - Patches for Grid Infrastructure
Collecting - number of semaphore operations per semop system call
Collecting - CRS user limits configuration
Collecting - Database Server Infrastructure Software and Configuration
Collecting - umask setting for GI owner

Data collections completed. Checking best practices on oda01.
------------------------------------------------------------

 INFO =>    Oracle Database Appliance Best Practice References
 INFO =>    Oracle Data Pump Best practices.
 INFO =>    Important Storage Minimum Requirements for Grid & Database Homes
 WARNING => soft or hard memlock are not configured according to recommendation
 INFO =>    CSS disktimeout is not set to the default value
 WARNING => OCR is not being backed up daily
 INFO =>    CSS misscount is not set to the default value of 30
 INFO =>    Jumbo frames (MTU >= 9000) are not configured for interconnect
 INFO =>    Information about hanganalyze and systemstate dump
 WARNING => One or more diskgroups from v$asm_diskgroups are not registered in clusterware registry

Best Practice checking completed.
Checking recommended patches on oda01
--------------------------------------------------------------------------------
Collecting patch inventory on CRS_HOME /u01/app/19.0.0.0/grid
Collecting patch inventory on ASM_HOME /u01/app/19.0.0.0/grid
------------------------------------------------------------
                    CLUSTERWIDE CHECKS
------------------------------------------------------------
------------------------------------------------------------

Detailed report (html) - /opt/oracle/dcs/oracle.ahf/data/oda01/orachk/orachk_oda01_110420_163217/orachk_oda01_110420_163217.html

UPLOAD [if required] - /opt/oracle/dcs/oracle.ahf/data/oda01/orachk/orachk_oda01_110420_163217.zip
Then we need to ensure we have a good backup of the open databases running on the ODA. If the ODA is part of a high-availability setup (Data Guard for Enterprise Edition, or Dbvisit for Standard Edition), we first run a switchover so that only standby databases remain on the ODA, and we stop the databases' synchronization for the duration of the patching.
Patching the ODA to 19.9
Once these requirements are met, we can start the patching.
We first need to unzip the patch files. The patch 31922078 files can be downloaded from the My Oracle Support portal.
[root@ODA01 orachk]# cd /u01/app/patch/
[root@ODA01 patch]# ls -ltrh
total 16G
-rw-r--r-- 1 root root 6.7G Nov  4 14:11 p31922078_199000_Linux-x86-64_2of2.zip
-rw-r--r-- 1 root root 9.2G Nov  4 15:17 p31922078_199000_Linux-x86-64_1of2.zip
[root@ODA01 patch]# unzip p31922078_199000_Linux-x86-64_1of2.zip
Archive:  p31922078_199000_Linux-x86-64_1of2.zip
 extracting: oda-sm-19.9.0.0.0-201023-server1of2.zip
  inflating: README.txt
[root@ODA01 patch]# unzip p31922078_199000_Linux-x86-64_2of2.zip
Archive:  p31922078_199000_Linux-x86-64_2of2.zip
 extracting: oda-sm-19.9.0.0.0-201023-server2of2.zip
[root@ODA01 patch]# ls -ltrh
total 32G
-rw-r--r-- 1 root root 9.2G Oct 29 04:51 oda-sm-19.9.0.0.0-201023-server1of2.zip
-rw-r--r-- 1 root root 6.7G Oct 29 04:53 oda-sm-19.9.0.0.0-201023-server2of2.zip
-rw-r--r-- 1 root root  190 Oct 29 06:17 README.txt
-rw-r--r-- 1 root root 6.7G Nov  4 14:11 p31922078_199000_Linux-x86-64_2of2.zip
-rw-r--r-- 1 root root 9.2G Nov  4 15:17 p31922078_199000_Linux-x86-64_1of2.zip
[root@ODA01 patch]# rm -f p31922078_199000_Linux-x86-64_2of2.zip
[root@ODA01 patch]# rm -f p31922078_199000_Linux-x86-64_1of2.zip
We can then update the ODA repository with the patch files:
[root@ODA01 patch]# odacli update-repository -f /u01/app/patch/oda-sm-19.9.0.0.0-201023-server1of2.zip
{
  "jobId" : "0c23cb4e-2455-4ad2-832b-168edce2f40c",
  "status" : "Created",
  "message" : "/u01/app/patch/oda-sm-19.9.0.0.0-201023-server1of2.zip",
  "reports" : [ ],
  "createTimestamp" : "November 04, 2020 16:55:52 PM CET",
  "resourceList" : [ ],
  "description" : "Repository Update",
  "updatedTime" : "November 04, 2020 16:55:52 PM CET"
}
[root@ODA01 patch]# odacli describe-job -i "0c23cb4e-2455-4ad2-832b-168edce2f40c"

Job details
----------------------------------------------------------------
                     ID:  0c23cb4e-2455-4ad2-832b-168edce2f40c
            Description:  Repository Update
                 Status:  Success
                Created:  November 4, 2020 4:55:52 PM CET
                Message:  /u01/app/patch/oda-sm-19.9.0.0.0-201023-server1of2.zip

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@ODA01 patch]# odacli update-repository -f /u01/app/patch/oda-sm-19.9.0.0.0-201023-server2of2.zip
{
  "jobId" : "04ecd45d-6b92-475c-acd9-202f0137474f",
  "status" : "Created",
  "message" : "/u01/app/patch/oda-sm-19.9.0.0.0-201023-server2of2.zip",
  "reports" : [ ],
  "createTimestamp" : "November 04, 2020 16:58:05 PM CET",
  "resourceList" : [ ],
  "description" : "Repository Update",
  "updatedTime" : "November 04, 2020 16:58:05 PM CET"
}
[root@ODA01 patch]# odacli describe-job -i "04ecd45d-6b92-475c-acd9-202f0137474f"

Job details
----------------------------------------------------------------
                     ID:  04ecd45d-6b92-475c-acd9-202f0137474f
            Description:  Repository Update
                 Status:  Success
                Created:  November 4, 2020 4:58:05 PM CET
                Message:  /u01/app/patch/oda-sm-19.9.0.0.0-201023-server2of2.zip

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@ODA01 patch]# odacli list-jobs | head -n 3; odacli list-jobs | tail -n 3

ID                                       Description                                                                 Created                             Status
---------------------------------------- --------------------------------------------------------------------------- ----------------------------------- ----------
0c23cb4e-2455-4ad2-832b-168edce2f40c     Repository Update                                                           November 4, 2020 4:55:52 PM CET     Success
04ecd45d-6b92-475c-acd9-202f0137474f     Repository Update                                                           November 4, 2020 4:58:05 PM CET     Success
We can already clean up the patch folder, as the files are not needed any more:
[root@ODA01 patch]# ls -ltrh
total 16G
-rw-r--r-- 1 root root 9.2G Oct 29 04:51 oda-sm-19.9.0.0.0-201023-server1of2.zip
-rw-r--r-- 1 root root 6.7G Oct 29 04:53 oda-sm-19.9.0.0.0-201023-server2of2.zip
-rw-r--r-- 1 root root  190 Oct 29 06:17 README.txt
[root@ODA01 patch]# rm -f *.zip
[root@ODA01 patch]# rm -f README.txt
We then check the currently installed versions and the newly available ones:
[root@ODA01 patch]# odacli describe-component
System Version
---------------
19.6.0.0.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       19.6.0.0.0           19.9.0.0.0
GI                                        19.6.0.0.200114      19.9.0.0.201020
DB                                        18.7.0.0.190716      18.12.0.0.201020
DCSAGENT                                  19.6.0.0.0           19.9.0.0.0
ILOM                                      4.0.4.51.r133528     5.0.1.21.r136383
BIOS                                      52021000             52030400
OS                                        7.7                  7.8
FIRMWARECONTROLLER                        VDV1RL02             VDV1RL04
FIRMWAREDISK                              1102                 1132
HMP                                       2.4.5.0.1            2.4.7.0.1
I usually stop the databases at this point. It is not mandatory, but I prefer it. This can be done by stopping each database with the srvctl stop database command, or with srvctl stop home to stop all databases running from the same RDBMS home.
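A sketch of the srvctl stop home variant; the home path, state file and node name below are assumptions to adapt, and the leading echo makes this a dry run (drop it to really execute, as the database home owner rather than root):

```shell
# Stop all databases of one RDBMS home before patching (dry-run sketch).
# ORACLE_HOME, STATEFILE and the node name are placeholder assumptions.
ORACLE_HOME=/u01/app/oracle/product/18.0.0.0/dbhome_1
STATEFILE=/tmp/dbhome_1.state

# Stop every database running from this home, remembering their state:
echo srvctl stop home -oraclehome "$ORACLE_HOME" -statefile "$STATEFILE" -node oda01
# After patching, restart exactly what was running before:
echo srvctl start home -oraclehome "$ORACLE_HOME" -statefile "$STATEFILE" -node oda01
```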
Now we can update the dcs-agent:
[root@ODA01 patch]# /opt/oracle/dcs/bin/odacli update-dcsagent -v 19.9.0.0.0
{
  "jobId" : "fa6c5e53-b0b7-470e-b856-ccf19a0305ef",
  "status" : "Created",
  "message" : "Dcs agent will be restarted after the update. Please wait for 2-3 mins before executing the other commands",
  "reports" : [ ],
  "createTimestamp" : "November 04, 2020 17:02:53 PM CET",
  "resourceList" : [ ],
  "description" : "DcsAgent patching",
  "updatedTime" : "November 04, 2020 17:02:53 PM CET"
}
[root@ODA01 patch]# odacli describe-job -i "fa6c5e53-b0b7-470e-b856-ccf19a0305ef"

Job details
----------------------------------------------------------------
                     ID:  fa6c5e53-b0b7-470e-b856-ccf19a0305ef
            Description:  DcsAgent patching
                 Status:  Success
                Created:  November 4, 2020 5:02:53 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
dcs-agent upgrade to version 19.9.0.0.0  November 4, 2020 5:02:53 PM CET     November 4, 2020 5:04:28 PM CET     Success
Update System version                    November 4, 2020 5:04:28 PM CET     November 4, 2020 5:04:28 PM CET     Success
We now update the DCS admin:
[root@ODA01 patch]# /opt/oracle/dcs/bin/odacli update-dcsadmin -v 19.9.0.0.0
{
  "jobId" : "bdcbda55-d325-44ca-8bed-f0b15eeacfae",
  "status" : "Created",
  "message" : null,
  "reports" : [ ],
  "createTimestamp" : "November 04, 2020 17:04:57 PM CET",
  "resourceList" : [ ],
  "description" : "DcsAdmin patching",
  "updatedTime" : "November 04, 2020 17:04:57 PM CET"
}
[root@ODA01 patch]# odacli describe-job -i "bdcbda55-d325-44ca-8bed-f0b15eeacfae"

Job details
----------------------------------------------------------------
                     ID:  bdcbda55-d325-44ca-8bed-f0b15eeacfae
            Description:  DcsAdmin patching
                 Status:  Success
                Created:  November 4, 2020 5:04:57 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Patch location validation                November 4, 2020 5:04:58 PM CET     November 4, 2020 5:04:58 PM CET     Success
dcs-admin upgrade                        November 4, 2020 5:04:58 PM CET     November 4, 2020 5:05:04 PM CET     Success
Next we update the DCS components:
[root@ODA01 patch]# /opt/oracle/dcs/bin/odacli update-dcscomponents -v 19.9.0.0.0
{
  "jobId" : "4782c035-86fd-496b-b9f1-1055d77071b3",
  "status" : "Success",
  "message" : null,
  "reports" : null,
  "createTimestamp" : "November 04, 2020 17:05:48 PM CET",
  "description" : "Job completed and is not part of Agent job list",
  "updatedTime" : "November 04, 2020 17:05:48 PM CET"
}
We then run the prepatch report:
[root@ODA01 patch]# /opt/oracle/dcs/bin/odacli create-prepatchreport -s -v 19.9.0.0.0

Job details
----------------------------------------------------------------
                     ID:  d836f326-aba3-44e6-9be4-aaa031b5d730
            Description:  Patch pre-checks for [OS, ILOM, GI, ORACHKSERVER]
                 Status:  Created
                Created:  November 4, 2020 5:07:37 PM CET
                Message:  Use 'odacli describe-prepatchreport -i d836f326-aba3-44e6-9be4-aaa031b5d730' to check details of results

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
And we check the report:
[root@ODA01 patch]# odacli describe-prepatchreport -i d836f326-aba3-44e6-9be4-aaa031b5d730

Patch pre-check report
------------------------------------------------------------------------
                 Job ID:  d836f326-aba3-44e6-9be4-aaa031b5d730
            Description:  Patch pre-checks for [OS, ILOM, GI, ORACHKSERVER]
                 Status:  FAILED
                Created:  November 4, 2020 5:07:37 PM CET
                 Result:  One or more pre-checks failed for [ORACHK]

Node Name
---------------
ODA01

Pre-Check                      Status   Comments
------------------------------ -------- --------------------------------------
__OS__
Validate supported versions     Success  Validated minimum supported versions.
Validate patching tag           Success  Validated patching tag: 19.9.0.0.0.
Is patch location available     Success  Patch location is available.
Verify OS patch                 Success  Verified OS patch
Validate command execution      Success  Validated command execution

__ILOM__
Validate supported versions     Success  Validated minimum supported versions.
Validate patching tag           Success  Validated patching tag: 19.9.0.0.0.
Is patch location available     Success  Patch location is available.
Checking Ilom patch Version     Success  Successfully verified the versions
Patch location validation       Success  Successfully validated location
Validate command execution      Success  Validated command execution

__GI__
Validate supported GI versions  Success  Validated minimum supported versions.
Validate available space        Success  Validated free space under /u01
Is clusterware running          Success  Clusterware is running
Validate patching tag           Success  Validated patching tag: 19.9.0.0.0.
Is system provisioned           Success  Verified system is provisioned
Validate ASM in online          Success  ASM is online
Validate minimum agent version  Success  GI patching enabled in current DCSAGENT version
Validate GI patch metadata      Success  Validated patching tag: 19.9.0.0.0.
Validate clones location exist  Success  Validated clones location
Is patch location available     Success  Patch location is available.
Patch location validation       Success  Successfully validated location
Patch verification              Success  Patches 31771877 not applied on GI home /u01/app/19.0.0.0/grid on node ODA01
Validate Opatch update          Success  Successfully updated the opatch in GiHome /u01/app/19.0.0.0/grid on node ODA01
Patch conflict check            Success  No patch conflicts found on GiHome /u01/app/19.0.0.0/grid on node ODA01
Validate command execution      Success  Validated command execution

__ORACHK__
Running orachk                  Failed   Orachk validation failed: .
Validate command execution      Success  Validated command execution
Software home                   Failed   Software home check failed
The prepatch report failed on the orachk and software home checks. In the orachk HTML report I could see that the software home check fails because of missing files:
FAIL => Software home check failed
Error Message: File "/u01/app/19.0.0.0/grid/jdk/jre/lib/amd64/libjavafx_font_t2k.so" could not be verified on node "oda01". OS error: "No such file or directory"
Error Message: File "/u01/app/19.0.0.0/grid/jdk/jre/lib/amd64/libkcms.so" could not be verified on node "oda01". OS error: "No such file or directory"
Error Message: File "/u01/app/19.0.0.0/grid/rdbms/lib/ksms.o" could not be verified on node "oda01". OS error: "No such file or directory"
These files are expected by the orachk check, as referenced in the CVU XML files:
[root@ODA01 ~]# grep ksms /u01/app/19.0.0.0/grid/cv/cvdata/ora_software_cfg.xml
      <File Path="rdbms/lib/" Name="ksms.o" Permissions="644"/>
      <File Path="bin/" Name="genksms" Permissions="755"/>
      <File Path="rdbms/lib/" Name="genksms.o"/>
      <File Path="rdbms/lib/" Name="ksms.o" Permissions="644"/>
      <File Path="bin/" Name="genksms" Permissions="755"/>
      <File Path="rdbms/lib/" Name="genksms.o"/>
      <File Path="rdbms/lib/" Name="ksms.o" Permissions="644"/>
      <File Path="bin/" Name="genksms" Permissions="755"/>
      <File Path="rdbms/lib/" Name="genksms.o"/>
[root@ODA01 ~]# grep ksms /u01/app/19.0.0.0/grid/cv/cvdata/19/ora_software_cfg.xml
      <File Path="rdbms/lib/" Name="ksms.o" Permissions="644"/>
      <File Path="rdbms/lib/" Name="genksms.o"/>
      <File Path="bin/" Name="genksms" Permissions="755"/>
      <File Path="rdbms/lib/" Name="ksms.o" Permissions="644"/>
      <File Path="rdbms/lib/" Name="genksms.o"/>
      <File Path="bin/" Name="genksms" Permissions="755"/>
      <File Path="rdbms/lib/" Name="ksms.o" Permissions="644"/>
      <File Path="rdbms/lib/" Name="genksms.o"/>
      <File Path="bin/" Name="genksms" Permissions="755"/>
I found the following MOS note describing the same problem: File “$GRID_HOME/rdbms/lib/ksms.o” could not be verified on node (Doc ID 1908505.1).
As per this note, the error can be ignored. I therefore decided to move forward with the server patching.
I started the server patching:
[root@ODA01 patch]# /opt/oracle/dcs/bin/odacli update-server -v 19.9.0.0.0
{
  "jobId" : "78f3ea84-4e31-4e1f-b195-eb4e75429102",
  "status" : "Created",
  "message" : "Success of server update will trigger reboot of the node after 4-5 minutes. Please wait until the node reboots.",
  "reports" : [ ],
  "createTimestamp" : "November 04, 2020 17:17:57 PM CET",
  "resourceList" : [ ],
  "description" : "Server Patching",
  "updatedTime" : "November 04, 2020 17:17:57 PM CET"
}
But the patching failed immediately, because orachk was unsuccessful due to the problem described above:
[root@ODA01 patch]# odacli describe-job -i "78f3ea84-4e31-4e1f-b195-eb4e75429102"

Job details
----------------------------------------------------------------
                     ID:  78f3ea84-4e31-4e1f-b195-eb4e75429102
            Description:  Server Patching
                 Status:  Failure
                Created:  November 4, 2020 5:17:57 PM CET
                Message:  DCS-10702:Orachk validation failed: Please run describe-prepatchreport 78f3ea84-4e31-4e1f-b195-eb4e75429102 to see details.

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Server patching                          November 4, 2020 5:18:05 PM CET     November 4, 2020 5:22:05 PM CET     Failure
Orachk Server Patching                   November 4, 2020 5:18:05 PM CET     November 4, 2020 5:22:05 PM CET     Failure
So starting with 19.9, orachk appears to be mandatory before any patching: if orachk does not complete successfully, the patching fails.
Fortunately, there is a new skip-orachk option to skip the orachk validation during server patching:
[root@ODA01 patch]# /opt/oracle/dcs/bin/odacli update-server -v 19.9.0.0.0 -h
Usage: update-server [options]
  Options:
    --component, -c
      The component that is requested for update. The supported components include: OS
    --force, -f
      Ignore precheck error and force patching
    --help, -h
      get help
    --json, -j
      json output
    --local, -l
      Update Server Components Locally
    --node, -n
      Node to be updated
    --precheck, -p
      Obsolete flag
    --skip-orachk, -sko
      Option to skip orachk validations
    --version, -v
      Version to be updated
Using this option, I could successfully patch the server:
[root@ODA01 patch]# /opt/oracle/dcs/bin/odacli update-server -v 19.9.0.0.0 -sko
{
  "jobId" : "878fac12-a2a0-4302-955c-7df3d4fdd517",
  "status" : "Created",
  "message" : "Success of server update will trigger reboot of the node after 4-5 minutes. Please wait until the node reboots.",
  "reports" : [ ],
  "createTimestamp" : "November 04, 2020 18:03:15 PM CET",
  "resourceList" : [ ],
  "description" : "Server Patching",
  "updatedTime" : "November 04, 2020 18:03:15 PM CET"
}
[root@ODA01 ~]# uptime
 19:06:00 up 2 min,  1 user,  load average: 2.58, 1.32, 0.52
[root@ODA01 ~]# odacli describe-job -i "878fac12-a2a0-4302-955c-7df3d4fdd517"

Job details
----------------------------------------------------------------
                     ID:  878fac12-a2a0-4302-955c-7df3d4fdd517
            Description:  Server Patching
                 Status:  Success
                Created:  November 4, 2020 6:03:15 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Patch location validation                November 4, 2020 6:03:23 PM CET     November 4, 2020 6:03:23 PM CET     Success
dcs-controller upgrade                   November 4, 2020 6:03:23 PM CET     November 4, 2020 6:03:28 PM CET     Success
Patch location validation                November 4, 2020 6:03:30 PM CET     November 4, 2020 6:03:30 PM CET     Success
dcs-cli upgrade                          November 4, 2020 6:03:30 PM CET     November 4, 2020 6:03:30 PM CET     Success
Creating repositories using yum          November 4, 2020 6:03:30 PM CET     November 4, 2020 6:03:33 PM CET     Success
Updating YumPluginVersionLock rpm        November 4, 2020 6:03:33 PM CET     November 4, 2020 6:03:33 PM CET     Success
Applying OS Patches                      November 4, 2020 6:03:33 PM CET     November 4, 2020 6:13:18 PM CET     Success
Creating repositories using yum          November 4, 2020 6:13:18 PM CET     November 4, 2020 6:13:18 PM CET     Success
Applying HMP Patches                     November 4, 2020 6:13:18 PM CET     November 4, 2020 6:13:38 PM CET     Success
Client root Set up                       November 4, 2020 6:13:38 PM CET     November 4, 2020 6:13:41 PM CET     Success
Client grid Set up                       November 4, 2020 6:13:41 PM CET     November 4, 2020 6:13:46 PM CET     Success
Patch location validation                November 4, 2020 6:13:46 PM CET     November 4, 2020 6:13:46 PM CET     Success
oda-hw-mgmt upgrade                      November 4, 2020 6:13:46 PM CET     November 4, 2020 6:14:17 PM CET     Success
OSS Patching                             November 4, 2020 6:14:17 PM CET     November 4, 2020 6:14:18 PM CET     Success
Applying Firmware Disk Patches           November 4, 2020 6:14:18 PM CET     November 4, 2020 6:14:21 PM CET     Success
Applying Firmware Controller Patches     November 4, 2020 6:14:21 PM CET     November 4, 2020 6:14:24 PM CET     Success
Checking Ilom patch Version              November 4, 2020 6:14:25 PM CET     November 4, 2020 6:14:27 PM CET     Success
Patch location validation                November 4, 2020 6:14:27 PM CET     November 4, 2020 6:14:28 PM CET     Success
Save password in Wallet                  November 4, 2020 6:14:29 PM CET     November 4, 2020 6:14:30 PM CET     Success
Apply Ilom patch                         November 4, 2020 6:14:30 PM CET     November 4, 2020 6:22:34 PM CET     Success
Copying Flash Bios to Temp location      November 4, 2020 6:22:34 PM CET     November 4, 2020 6:22:34 PM CET     Success
Starting the clusterware                 November 4, 2020 6:22:35 PM CET     November 4, 2020 6:23:58 PM CET     Success
clusterware patch verification           November 4, 2020 6:23:58 PM CET     November 4, 2020 6:24:01 PM CET     Success
Patch location validation                November 4, 2020 6:24:01 PM CET     November 4, 2020 6:24:01 PM CET     Success
Opatch update                            November 4, 2020 6:24:43 PM CET     November 4, 2020 6:24:46 PM CET     Success
Patch conflict check                     November 4, 2020 6:24:46 PM CET     November 4, 2020 6:25:31 PM CET     Success
clusterware upgrade                      November 4, 2020 6:25:52 PM CET     November 4, 2020 6:50:57 PM CET     Success
Updating GiHome version                  November 4, 2020 6:50:57 PM CET     November 4, 2020 6:51:12 PM CET     Success
Update System version                    November 4, 2020 6:51:16 PM CET     November 4, 2020 6:51:16 PM CET     Success
Cleanup JRE Home                         November 4, 2020 6:51:16 PM CET     November 4, 2020 6:51:16 PM CET     Success
preRebootNode Actions                    November 4, 2020 6:51:16 PM CET     November 4, 2020 6:51:57 PM CET     Success
Reboot Ilom                              November 4, 2020 6:51:57 PM CET     November 4, 2020 6:51:57 PM CET     Success
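Since the node reboots in the middle of the server patching, I find a small polling helper convenient for waiting on the job after logging back in. A sketch that only assumes the "Status:" line of the describe-job output shown above:

```shell
# Poll a job-describe command until its Status is no longer Running.
# The command to run is passed as a string, so any describe command works.
wait_for_job() {
  local describe_cmd="$1" status
  while :; do
    status=$(eval "$describe_cmd" | awk -F':[ ]*' '/^[[:space:]]*Status/{print $2; exit}')
    echo "status: $status"
    [ "$status" = "Running" ] || break
    sleep 60
  done
}

# On a real ODA (with the job id returned by odacli update-server):
# wait_for_job 'odacli describe-job -i "878fac12-a2a0-4302-955c-7df3d4fdd517"'
```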
After the reboot, I checked the newly installed versions:
[root@ODA01 ~]# odacli describe-component
System Version
---------------
19.9.0.0.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       19.9.0.0.0           up-to-date
GI                                        19.9.0.0.201020      up-to-date
DB                                        18.7.0.0.190716      18.12.0.0.201020
DCSAGENT                                  19.9.0.0.0           up-to-date
ILOM                                      5.0.1.21.r136383     up-to-date
BIOS                                      52030400             up-to-date
OS                                        7.8                  up-to-date
FIRMWARECONTROLLER                        VDV1RL02             VDV1RL04
FIRMWAREDISK                              1102                 1132
HMP                                       2.4.7.0.1            up-to-date
I then patched the storage:
[root@ODA01 ~]# odacli update-storage -v 19.9.0.0.0
{
  "jobId" : "61871e3d-088b-43af-8b91-94dc4fa1331a",
  "status" : "Created",
  "message" : "Success of Storage Update may trigger reboot of node after 4-5 minutes. Please wait till node restart",
  "reports" : [ ],
  "createTimestamp" : "November 04, 2020 19:07:17 PM CET",
  "resourceList" : [ ],
  "description" : "Storage Firmware Patching",
  "updatedTime" : "November 04, 2020 19:07:17 PM CET"
}
[root@ODA01 ~]# odacli describe-job -i "61871e3d-088b-43af-8b91-94dc4fa1331a"

Job details
----------------------------------------------------------------
                     ID:  61871e3d-088b-43af-8b91-94dc4fa1331a
            Description:  Storage Firmware Patching
                 Status:  Success
                Created:  November 4, 2020 7:07:17 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Applying Firmware Disk Patches           November 4, 2020 7:07:20 PM CET     November 4, 2020 7:07:21 PM CET     Success
preRebootNode Actions                    November 4, 2020 7:07:21 PM CET     November 4, 2020 7:07:21 PM CET     Success
Reboot Ilom                              November 4, 2020 7:07:21 PM CET     November 4, 2020 7:07:21 PM CET     Success
Surprisingly, the storage patching completed immediately and without any reboot.
Checking the versions, I could see that the storage was indeed still running the old firmware:
[root@ODA01 ~]# odacli describe-component
System Version
---------------
19.9.0.0.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       19.9.0.0.0           up-to-date
GI                                        19.9.0.0.201020      up-to-date
DB                                        18.7.0.0.190716      18.12.0.0.201020
DCSAGENT                                  19.9.0.0.0           up-to-date
ILOM                                      5.0.1.21.r136383     up-to-date
BIOS                                      52030400             up-to-date
OS                                        7.8                  up-to-date
FIRMWARECONTROLLER                        VDV1RL02             VDV1RL04
FIRMWAREDISK                              1102                 1132
HMP                                       2.4.7.0.1            up-to-date
I opened an SR and got confirmation from Oracle Support that this is a bug. It does not impact any functionality.
The bug in question is:
Bug 32017186 – LNX64-199-CMT : FIRMWARECONTROLLER NOT PATCHED FOR 19.9
---------------------------- UPDATE FROM 12.11.2020 ----------------------------
Oracle Support gave me the workaround to solve the disk and controller firmware patching issue.
Disk firmware version 1132 is not actually available yet, so the XML metadata file should list version 1102 as well.
We first back up the current /opt/oracle/oak/pkgrepos/System/latest/patchmetadata.xml file:
[root@ODA01 ~]# cp -p /opt/oracle/oak/pkgrepos/System/latest/patchmetadata.xml /opt/oracle/oak/pkgrepos/System/latest/patchmetadata.xml.orig.20201109
We then transfer the new file provided by Oracle Support (19.9.patchmetadata.xml) and put it in place:
[root@ODA01 ~]# cd /opt/oracle/oak/pkgrepos/System/latest
[root@ODA01 latest]# ls -ltrh *patchmetadata.xml
-rwx------ 1 root root 39K Oct 29 05:10 patchmetadata.xml
-rw-r--r-- 1 root root 39K Nov  9 16:02 19.9.patchmetadata.xml
[root@ODA01 latest]# mv 19.9.patchmetadata.xml patchmetadata.xml
mv: overwrite ‘patchmetadata.xml’? y
[root@ODA01 latest]# diff patchmetadata.xml patchmetadata.xml.orig.20201109
1019c1019
< 1102,1132
---
> 1132
[root@ODA01 latest]#
The odacli describe-component command now shows the correct output for the disk firmware:
[root@ODA01 latest]# odacli describe-component
System Version
---------------
19.9.0.0.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       19.9.0.0.0           up-to-date
GI                                        19.9.0.0.201020      up-to-date
DB                                        18.7.0.0.190716      18.12.0.0.201020
DCSAGENT                                  19.9.0.0.0           up-to-date
ILOM                                      5.0.1.21.r136383     up-to-date
BIOS                                      52030400             up-to-date
OS                                        7.8                  up-to-date
FIRMWARECONTROLLER                        VDV1RL02             VDV1RL04
FIRMWAREDISK                              1102                 up-to-date
HMP                                       2.4.7.0.1            up-to-date
As for the controller firmware, we patch it manually as follows. There is no impact and no downtime, and no reboot is needed.
The firmware to apply manually is version VDV1RL04, stored in:
[root@ODA01 log]# cd /opt/oracle/oak/pkgrepos/firmwarecontroller/intel/0x0a54/vdv1rl04/7361456_icrpc2dd2ora6.4t
[root@ODA01 7361456_icrpc2dd2ora6.4t]# ls
componentmetadata.xml  ICRPC2DD2.RL04.fw  metadata.xml
We first check the current NVMe controller firmware version; it should be VDV1RL02 for all NVMe disks:
[root@ODA01 7361456_icrpc2dd2ora6.4t]# fwupdate list controller

==================================================
CONTROLLER
==================================================
ID  Type  Manufacturer  Model   Product Name               FW Version  BIOS Version  EFI Version  FCODE Version  Package Version  NVDATA Version  XML Support
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
c0  NVMe  Intel         0x0a54  7361456_ICRPC2DD2ORA6.4T   VDV1RL02    -             -            -              -                -               N/A
c1  NVMe  Intel         0x0a54  7361456_ICRPC2DD2ORA6.4T   VDV1RL02    -             -            -              -                -               N/A
c2  NVMe  Intel         0x0a54  7361456_ICRPC2DD2ORA6.4T   VDV1RL02    -             -            -              -                -               N/A
c3  NVMe  Intel         0x0a54  7361456_ICRPC2DD2ORA6.4T   VDV1RL02    -             -            -              -                -               N/A
c4  NVMe  Intel         0x0a54  7361456_ICRPC2DD2ORA6.4T   VDV1RL02    -             -            -              -                -               N/A
c5  NVMe  Intel         0x0a54  7361456_ICRPC2DD2ORA6.4T   VDV1RL02    -             -            -              -                -               N/A
c6  HDC   Intel         0x2826  0x486c                     -           -             -            -              -                -               N/A
c7  NET   Intel         0x1533  Intel(R) I210 Gigabit Net  -           -             80000681     -                                               N/A
We will patch them manually. The following command needs to be run for each NVMe disk ID, in my case c0, c1, c2, c3, c4 and c5. An example is given for ID c1:
[root@ODA01 7361456_icrpc2dd2ora6.4t]# fwupdate update controller -n c1 -x metadata.xml
The following actions will be taken:
==========================================================
ID   Priority   Action     Status    Old Firmware Ver.   Proposed Ver.   New Firmware Ver.   System Reboot
------------------------------------------------------------------------------------------------------------------------
c1   1          Check FW   Success   VDV1RL02            VDV1RL04        N/A                 None
Do you wish to process the above actions? [y/n]? y
Updating c1: Success
Sleeping for 10 seconds for component to recover
Resetting c1
Mandatory post reset 60 second sleep
Verifying all priority 1 updates
Execution Summary
==========================================================
ID   Priority   Action     Status    Old Firmware Ver.   Proposed Ver.   New Firmware Ver.   System Reboot
------------------------------------------------------------------------------------------------------------------------
c1   1          Validate   Success   VDV1RL02            VDV1RL04        VDV1RL04            None
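Rather than typing the command six times, the per-disk updates can be scripted in a loop. This is only a sketch: it pipes the interactive "y" confirmation into fwupdate, and for illustration a stub function stands in for fwupdate when the real tool is not installed.

```shell
# Sketch: apply the same firmware update to each NVMe controller ID.
# Run from the directory containing metadata.xml.
# For illustration only: a stub replaces fwupdate when it is absent.
command -v fwupdate >/dev/null 2>&1 || fwupdate() { echo "stub: fwupdate $*"; }

for id in c0 c1 c2 c3 c4 c5; do
  echo "--- updating controller $id ---"
  # Answer the "[y/n]?" confirmation prompt automatically.
  echo y | fwupdate update controller -n "$id" -x metadata.xml
done
```

In production you may prefer to run each update by hand and check the "Execution Summary" of one disk before moving to the next.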
All NVMe disks now show controller firmware version VDV1RL04:
[root@ODA01 7361456_icrpc2dd2ora6.4t]# fwupdate list controller
==================================================
CONTROLLER
==================================================
ID   Type   Manufacturer   Model    Product Name                FW Version   BIOS Version   EFI Version   FCODE Version   Package Version   NVDATA Version   XML Support
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
c0   NVMe   Intel          0x0a54   7361456_ICRPC2DD2ORA6.4T    VDV1RL04     -              -             -               -                 -                N/A
c1   NVMe   Intel          0x0a54   7361456_ICRPC2DD2ORA6.4T    VDV1RL04     -              -             -               -                 -                N/A
c2   NVMe   Intel          0x0a54   7361456_ICRPC2DD2ORA6.4T    VDV1RL04     -              -             -               -                 -                N/A
c3   NVMe   Intel          0x0a54   7361456_ICRPC2DD2ORA6.4T    VDV1RL04     -              -             -               -                 -                N/A
c4   NVMe   Intel          0x0a54   7361456_ICRPC2DD2ORA6.4T    VDV1RL04     -              -             -               -                 -                N/A
c5   NVMe   Intel          0x0a54   7361456_ICRPC2DD2ORA6.4T    VDV1RL04     -              -             -               -                 -                N/A
c6   HDC    Intel          0x2826   0x486c                      -            -              -             -               -                 -                N/A
c7   NET    Intel          0x1533   Intel(R) I210 Gigabit Net   -            -              80000681      -               N/A
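Instead of reading the table by eye, the check can be automated with a small filter. This is a sketch: the awk field positions ($2 = Type, $6 = FW Version) assume the column layout shown above, and a canned two-line sample stands in for the real fwupdate output when the tool is unavailable.

```shell
# Sketch: confirm every NVMe controller reports firmware VDV1RL04.
# list_controllers falls back to a canned sample (illustration only)
# when fwupdate is not installed.
list_controllers() {
  if command -v fwupdate >/dev/null 2>&1; then
    fwupdate list controller
  else
    printf '%s\n' \
      'c0 NVMe Intel 0x0a54 7361456_ICRPC2DD2ORA6.4T VDV1RL04' \
      'c1 NVMe Intel 0x0a54 7361456_ICRPC2DD2ORA6.4T VDV1RL04'
  fi
}

# Print the IDs of any NVMe controller whose firmware is not VDV1RL04.
outdated=$(list_controllers | awk '$2 == "NVMe" && $6 != "VDV1RL04" {print $1}')
if [ -z "$outdated" ]; then
  echo "All NVMe controllers report VDV1RL04"
else
  echo "Still outdated: $outdated"
fi
```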
The odacli describe-component command now shows the firmware controller as up-to-date:
[root@ODA01 7361456_icrpc2dd2ora6.4t]# odacli describe-component
System Version
---------------
19.9.0.0.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                      19.9.0.0.0           up-to-date
GI                                       19.9.0.0.201020      up-to-date
DB {
[ OraDB18000_home1,OraDB18000_home2 ]    18.7.0.0.190716      18.12.0.0.201020
[ OraDB19000_home1,OraDB19000_home2 ]    19.9.0.0.201020      up-to-date
}
DCSAGENT                                 19.9.0.0.0           up-to-date
ILOM                                     5.0.1.21.r136383     up-to-date
BIOS                                     52030400             up-to-date
OS                                       7.8                  up-to-date
FIRMWARECONTROLLER                       VDV1RL04             up-to-date
FIRMWAREDISK                             1102                 up-to-date
HMP                                      2.4.7.0.1            up-to-date
As for the RDBMS homes, they can be patched later. If we are using a high availability solution, both the primary and standby database homes need to be patched during the same maintenance window.
Post patching activities
We can now run the post-patching activities.
First we will check that no new hardware failure has appeared:
login as: root
Keyboard-interactive authentication prompts from server:
| Password:
End of keyboard-interactive prompts from server

Oracle(R) Integrated Lights Out Manager
Version 5.0.1.21 r136383
Copyright (c) 2020, Oracle and/or its affiliates. All rights reserved.
Warning: HTTPS certificate is set to factory default.
Hostname: ODA01-ILOM

-> show /SP/faultmgmt
 /SP/faultmgmt
    Targets:
        shell
    Properties:
    Commands:
        cd
        show
->
We can also remove our odabr snapshot backups:
[root@ODA01 ~]# export PATH=/opt/odabr:$PATH
[root@ODA01 ~]# odabr infosnap
--------------------------------------------------------
odabr - ODA node Backup Restore
Author: Ruggero Citton
RAC Pack, Cloud Innovation and Solution Engineering Team
Copyright Oracle, Inc. 2013, 2019
Version: 2.0.1-47
--------------------------------------------------------

LVM snap name    Status    COW Size      Data%
-------------    ------    ----------    ------
root_snap        active    30.00 GiB     22.79%
opt_snap         active    60.00 GiB     34.37%
u01_snap         active    100.00 GiB    35.58%

[root@ODA01 ~]# odabr delsnap
INFO: 2020-11-04 19:31:46: Please check the logfile '/opt/odabr/out/log/odabr_81687.log' for more details
INFO: 2020-11-04 19:31:46: Removing LVM snapshots
INFO: 2020-11-04 19:31:46: ...removing LVM snapshot for 'opt'
SUCCESS: 2020-11-04 19:31:46: ...snapshot for 'opt' removed successfully
INFO: 2020-11-04 19:31:46: ...removing LVM snapshot for 'u01'
SUCCESS: 2020-11-04 19:31:47: ...snapshot for 'u01' removed successfully
INFO: 2020-11-04 19:31:47: ...removing LVM snapshot for 'root'
SUCCESS: 2020-11-04 19:31:47: ...snapshot for 'root' removed successfully
SUCCESS: 2020-11-04 19:31:47: Remove LVM snapshots done successfully
We can clean up previous patch versions from the repository and reclaim additional space on /opt:
[root@ODA01 ~]# df -h / /u01 /opt
Filesystem                          Size  Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolRoot   30G   11G   18G  38% /
/dev/mapper/VolGroupSys-LogVolU01    99G   59G   35G  63% /u01
/dev/mapper/VolGroupSys-LogVolOpt    75G   60G   12G  84% /opt

[root@ODA01 ~]# odacli cleanup-patchrepo -comp GI,DB -v 19.6.0.0.0
{
  "jobId" : "97b9669b-6945-4358-938e-a3a3f3b73693",
  "status" : "Created",
  "message" : null,
  "reports" : [ ],
  "createTimestamp" : "November 04, 2020 19:32:16 PM CET",
  "resourceList" : [ ],
  "description" : "Cleanup patchrepos",
  "updatedTime" : "November 04, 2020 19:32:16 PM CET"
}

[root@ODA01 ~]# odacli describe-job -i "97b9669b-6945-4358-938e-a3a3f3b73693"

Job details
----------------------------------------------------------------
                     ID:  97b9669b-6945-4358-938e-a3a3f3b73693
            Description:  Cleanup patchrepos
                 Status:  Success
                Created:  November 4, 2020 7:32:16 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Cleanup Repository                       November 4, 2020 7:32:17 PM CET     November 4, 2020 7:32:17 PM CET     Success
Cleanup JRE Home                         November 4, 2020 7:32:17 PM CET     November 4, 2020 7:32:17 PM CET     Success

[root@ODA01 ~]# df -h / /u01 /opt
Filesystem                          Size  Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolRoot   30G   11G   18G  38% /
/dev/mapper/VolGroupSys-LogVolU01    99G   59G   35G  63% /u01
/dev/mapper/VolGroupSys-LogVolOpt    75G   49G   23G  68% /opt
We can restart our databases with the srvctl start database command, or all databases of a home at once with the srvctl start home command.
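As a minimal sketch, restarting each database individually could look like the loop below. The names DB1 and DB2 are placeholders for your own DB unique names, and for illustration a stub stands in for srvctl when it is not on the PATH.

```shell
# Sketch: restart each database after patching.
# DB1/DB2 are placeholder DB unique names; replace them with your own.
# For illustration only: a stub replaces srvctl when it is absent.
command -v srvctl >/dev/null 2>&1 || srvctl() { echo "stub: srvctl $*"; }

for db in DB1 DB2; do
  srvctl start database -d "$db"
done
```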
Finally, we will reactivate database synchronization if we are using Data Guard or Dbvisit.