Introduction
Patching an Oracle Database Appliance needs to be secured, because it is not limited to patching the databases: many other components are embedded in the patch, the goal being to keep everything up to date. A rollback may therefore be needed.
Admittedly, there is no possible rollback for firmware, BIOS and ILOM (it’s unlikely you would ever need to go back to a previous version), but you may need to roll back the OS and GI stack patches. Unlike DB homes, the OS and GI stack are located on the ODA local disks, and they can be protected during patching with LVM snapshots. This is the purpose of ODABR: taking LVM snapshots of /, /u01 and /opt before applying the patch. If something goes wrong, you can revert to the previous stable state.
ODABR needs enough space on the local disks to create the snapshots, and sometimes the available free space is not enough. How to deal with that?
When to take the snapshots?
Before using ODABR, I would recommend cleaning up the following filesystems: /, /u01 and /opt. You will need 20+% of free space in each of them, otherwise the patch prechecks won’t give you the green light. ODABR is based on Copy-On-Write technology, meaning that the old versions of changed blocks are kept so that you can go back in time if needed. Don’t unzip the patch files after taking the snapshots if the patch files reside on the local disks! Once you’re ready to patch, just before registering the patch and applying the DCS components update, use ODABR to take the snapshots.
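To check whether the cleanup was sufficient, a quick script like this one can help. It is a minimal sketch: the 80% threshold simply mirrors the 20+% free space rule above, and the tools used (df with GNU --output) are standard on ODA’s Oracle Linux.

```shell
# Minimal sketch: warn when a filesystem is more than 80% used,
# i.e. below the 20% free space the patch prechecks expect.
check_free() {
  fs=$1
  # df --output=pcent prints a header plus the Use% value; keep only the digits
  used=$(df --output=pcent "$fs" | tail -1 | tr -dc '0-9')
  if [ "$used" -gt 80 ]; then
    echo "WARNING: $fs is ${used}% used, clean it up before snapshotting"
  else
    echo "OK: $fs is ${used}% used"
  fi
}

for fs in / /u01 /opt; do
  if [ -d "$fs" ]; then
    check_free "$fs"
  fi
done
```

Run it as root before taking the snapshots; any WARNING line means more cleanup is needed on that filesystem.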
When to release the snapshots?
Once you have successfully applied the system patch and the GI patch, and once you have verified that everything runs fine with the new GI binaries, you can safely use ODABR to delete the snapshots. If you need to apply multiple patches, take new snapshots before each jump: you will never need to revert to the oldest version once you are in a stable intermediate version.
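Releasing the snapshots is a single ODABR call. The subcommands below are an assumption based on the ODABR versions I have used; double-check the syntax in MOS note 2466177.1 for your release. The calls are commented out because they only make sense on an actual ODA:

```shell
# Assumed ODABR syntax -- verify with MOS note 2466177.1 for your version:
ODABR=/opt/odabr/odabr
# $ODABR infosnap    # check snapshot status and COW usage first
# $ODABR delsnap     # then drop the three LVM snapshots at once
```

Deleting the snapshots is an online operation and frees the COW space back to the Volume Group immediately.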
Are the ODABR snapshots useful for DB homes?
Absolutely not. DB homes are located under /u01/app/odaorahome, a dedicated ACFS filesystem on the DATA diskgroup. You don’t need snapshots on this particular filesystem, because patching a DB home creates a new home and moves the database into it, the old one being kept as long as you want. You can delete older DB homes months after the patching was done. I usually delete old ones as part of the patching job once everything is fine, but I always back up the old DB home with a tar czf, just in case. If you need to revert a database to an old DB home, you can do it manually and use an RMAN backup to restore the database, or possibly roll back the datapatch if needed. This is rarely used.
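That tar czf backup can be wrapped in a small helper like this one. It is just a sketch: the DB home path in the commented call is illustrative, adapt it to your environment.

```shell
# Sketch: archive an old DB home into a compressed tarball before deleting it.
backup_home() {
  home_dir=$1    # DB home directory to archive
  dest_dir=$2    # directory where the tarball is stored
  mkdir -p "$dest_dir"
  archive="$dest_dir/$(basename "$home_dir")_$(date +%Y%m%d).tgz"
  # -C makes the archive content relative to the parent of the DB home
  tar czf "$archive" -C "$(dirname "$home_dir")" "$(basename "$home_dir")"
  echo "$archive"
}

# On a real ODA, something like (illustrative path):
# backup_home /u01/app/odaorahome/oracle/product/19.30.0.0/dbhome_1 /backup/odahomes
```

Keep the tarball on an NFS share rather than on the local disks, for the same space reasons discussed below.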
Location of the snapshots
ODABR relies on LVM snapshots, nothing really specific to the ODA. Snapshots are stored in the remaining space of the Volume Group. Most ODAs were sold with 2x 500GB local disks (SSDs) protected with software RAID and allocated to a single Volume Group, VolGroupSys. During the ODA setup, only part of the storage is allocated to the system volumes as Logical Volumes (LogVolSwap, LogVolOpt, LogVolU01, LogVolRoot), meaning that a comfortable amount of space remains available for ODABR:
pvs
PV VG Fmt Attr PSize PFree
/dev/md126p3 VolGroupSys lvm2 a-- 446.09g 285.09g
/opt/odabr/odabr backup -snap
...
SUCCESS: 2026-04-22 17:10:33: LVM snapshots backup done successfully
pvs
PV VG Fmt Attr PSize PFree
/dev/md126p3 VolGroupSys lvm2 a-- 446.09g 150.09g
ODABR additionally backs up the ASM metadata, because if you revert to a previous set of LVM snapshots with an older GI release, your ASM metadata also needs to be at the corresponding version.
Default snapshot sizes and real needs
The default snapshot size is the size of the corresponding Logical Volume; here is an example for a 30GB /, a 55GB /opt and a 50GB /u01:
/opt/odabr/odabr infosnap
│▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒│
odabr - ODA node Backup Restore - Version: 2.0.2-06
Copyright 2013, 2025, Oracle and/or its affiliates.
--------------------------------------------------------
RACPack, Cloud Innovation and Solution Engineering Team
│▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒│
LVM snap name status COW Size Data%
------------- ---------- ---------- ------
root_snap active 30.00 GiB 0.01%
opt_snap active 55.00 GiB 0.01%
u01_snap active 50.00 GiB 0.01%
This is quite comfortable: there certainly won’t be 100% of the blocks changed during the patching. If you don’t have these 135GB of free space, you can specify lower values for the snapshots, but make sure to limit modifications to the files residing on these 3 Logical Volumes during the patch:
/opt/odabr/odabr backup -snap -osize 18 -rsize 10 -usize 25
These settings were enough for my latest patches.
The ODA X9-2 series exception
As far as I remember, ODAs have always had comfortable system disk sizes. 500GB is large enough for the OS, odacli and the GI stack, plus a couple of files of your own. Since version 19.10, DB homes and the diagnostic destination are located within ACFS volumes, meaning that they don’t take a single MB from the local disks:
df -h | grep -e Filesystem -e u01
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolU01 49G 27G 20G 58% /u01
/dev/asm/odabase_n0-225 30G 2.5G 28G 9% /u01/app/odaorabase0
/dev/asm/orahome_sh-225 80G 44G 37G 54% /u01/app/odaorahome
As 19.10 was already available when the X9-2 was released, Oracle decided to decrease the system disks from 500GB to 250GB, meaning that you no longer had the margin of the previous generations. The problem was solved with the X10 series onwards: system disks are back to 500GB, as they were on the X8-2 series.
What I mean is that you will most probably be concerned by this lack of space for ODABR if you have an X9-2 series ODA.
How can you deal with insufficient disk space when using ODABR?
Imagine you have this kind of configuration:
pvs
PV VG Fmt Attr PSize PFree
/dev/md126p3 VolGroupSys lvm2 a-- 222.56g 2.56g
On this ODA, /opt and /u01 have been extended for some reason, and there is almost no space left in the Volume Group.
If you try to use ODABR to take snapshots, you will get this error:
Available LVM size X is less than required snapshot size Y
2.56GB is way too small for using ODABR: you will need to reduce the size of /u01, /opt or both. First, a cleanup is needed to free up some space inside these filesystems. Once done, you’ll be able to reduce the size of /u01 and/or /opt. But these filesystems will need to be unmounted, and that cannot be done without stopping and restarting some processes. Actually, pretty much all the processes running on your ODA.
Stopping the processes and reducing the filesystems
Let’s reduce the /u01 and /opt Logical Volumes’ size in this example:
su - root
df -h /u01 /opt
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolU01 61G 23G 36G 39% /u01
/dev/mapper/VolGroupSys-LogVolOpt 70G 48G 22G 68% /opt
export ORACLE_HOME=/u01/app/19.30.0.0/grid
$ORACLE_HOME/bin/crsctl stop crs
lvreduce -L 50G /dev/VolGroupSys/LogVolU01 -r
Do you want to unmount "/u01" ? [Y|n] y
umount: /u01: target is busy.
fsadm: Cannot proceed with mounted filesystem "/u01".
/usr/sbin/fsadm failed: 1
Filesystem resize failed.
lsof /u01/
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 23727 root 26r REG 252,3 16303574 1208968 /u01/app/19.30.0.0/grid/jlib/srvm.jar
kill -9 23727 ; umount /u01
umount: /u01: target is busy.
kill -9 23727 ; umount /u01
-bash: kill: (23727) - No such process
lvreduce -L 50G /dev/VolGroupSys/LogVolU01 -r
fsck from util-linux 2.32.1
/dev/mapper/VolGroupSys-LogVolU01: 76718/4063232 files (7.3% non-contiguous), 6220430/16252928 blocks
resize2fs 1.46.2 (28-Feb-2021)
Resizing the filesystem on /dev/mapper/VolGroupSys-LogVolU01 to 13107200 (4k) blocks.
The filesystem on /dev/mapper/VolGroupSys-LogVolU01 is now 13107200 (4k) blocks long.
Size of logical volume VolGroupSys/LogVolU01 changed from 62.00 GiB (1984 extents) to 50.00 GiB (1600 extents).
Logical volume VolGroupSys/LogVolU01 successfully resized.
systemctl stop initdcsagent
systemctl stop initdcscontroller
systemctl stop initdcsadmin
systemctl stop oda-mysql
systemctl stop oracle-ODA_DCS-ODA_DCS0
systemctl stop oracle-tfa
systemctl stop oracle-ohasd
lsof /opt
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
bash 6697 root cwd DIR 252,2 4096 1703937 /opt/dbi
OSWatcher 7799 grid cwd DIR 252,2 4096 2235031 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/oswbb
OSWatcher 7799 grid 1w REG 252,2 4703 2228894 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/run_1773762195.log (deleted)
OSWatcher 7799 grid 2w REG 252,2 4703 2228894 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/run_1773762195.log (deleted)
OSWatcher 7799 grid 255r REG 252,2 65286 2235047 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/oswbb/OSWatcher.sh
OSWatcher 9903 grid cwd DIR 252,2 4096 2235031 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/oswbb
OSWatcher 9903 grid 1w REG 252,2 4703 2228894 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/run_1773762195.log (deleted)
OSWatcher 9903 grid 2w REG 252,2 4703 2228894 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/run_1773762195.log (deleted)
OSWatcher 9903 grid 255r REG 252,2 8035 2235046 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/oswbb/OSWatcherFM.sh
sleep 56757 grid cwd DIR 252,2 4096 2235031 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/oswbb
sleep 56757 grid 1w REG 252,2 4703 2228894 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/run_1773762195.log (deleted)
sleep 56757 grid 2w REG 252,2 4703 2228894 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/run_1773762195.log (deleted)
sleep 56835 grid cwd DIR 252,2 4096 2235031 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/oswbb
sleep 56835 grid 1w REG 252,2 4703 2228894 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/run_1773762195.log (deleted)
sleep 56835 grid 2w REG 252,2 4703 2228894 /opt/oracle/dcs/oracle.ahf/data/repository/suptools/dbioda01/oswbb/grid/run_1773762195.log (deleted)
kill -9 7799 9903
lvreduce -L 55G /dev/VolGroupSys/LogVolOpt -r
Do you want to unmount "/opt" ? [Y|n] y
fsck from util-linux 2.32.1
/dev/mapper/VolGroupSys-LogVolOpt: Inode 655911 extent tree (at level 1) could be narrower. IGNORED.
/dev/mapper/VolGroupSys-LogVolOpt: 57911/4587520 files (2.4% non-contiguous), 12785629/18350080 blocks
resize2fs 1.46.2 (28-Feb-2021)
Resizing the filesystem on /dev/mapper/VolGroupSys-LogVolOpt to 14417920 (4k) blocks.
The filesystem on /dev/mapper/VolGroupSys-LogVolOpt is now 14417920 (4k) blocks long.
Size of logical volume VolGroupSys/LogVolOpt changed from 70.00 GiB (2240 extents) to 55.00 GiB (1760 extents).
Logical volume VolGroupSys/LogVolOpt successfully resized.
As I don’t want to restart the various processes manually, and as I usually do a sanity reboot before patching anyway, let’s reboot the server. Everything will then be started properly afterwards.
reboot
...
df -h /u01 /opt
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolU01 49G 23G 25G 49% /u01
/dev/mapper/VolGroupSys-LogVolOpt 54G 48G 4.0G 93% /opt
Further recommendations
You don’t really need a lot of space on the system disks. Put your patch files and other specific files on an NFS volume shared across your ODAs, and keep the system disks at their default sizes to retain a comfortable margin for ODABR, especially on the ODA X9-2 series. If needed, allow a few more GB to /u01 and/or /opt, but keep at least 70GB free for the ODABR snapshots. Anticipate the patching and test ODABR several days before: it costs nothing, and the snapshots can be removed immediately without any downtime. The worst case is having to patch now and discovering that there is no space left on the disks for using ODABR.
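As an illustration of such a shared patch repository, an NFS mount could look like the line below. Server name, export path and mount options are all assumptions to adapt to your environment:

```shell
# Illustrative /etc/fstab entry for a patch repository shared across ODAs
# (hostname and paths are made up):
# nfssrv01:/export/oda_patches   /mnt/oda_patches   nfs   rw,hard,vers=3   0 0
```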
Not being able to secure your patch with ODABR is the second no-go for patching, the first one being a faulty component reported by odaadmcli show server.
Conclusion
If you don’t have enough free space for ODABR snapshots, postpone the patching: ODABR is a mandatory safety net. Reducing the size of the /u01 and /opt filesystems is possible. It requires stopping everything, but as you are about to patch your ODA, you have already planned a sufficient downtime.