Building an Oracle infrastructure today without thinking about a Disaster Recovery (DR) solution is quite rare. It has become obvious that a backup or a dump will not help if you do not know where to restore or import it once your production server is down. And restoring a backup is definitely not the fastest way to bring your database back to life. As a consequence, Data Guard or Dbvisit Standby, depending on which edition you're running, is a must-have. And these tools are much more than Disaster Recovery solutions: you can use them for planned maintenance as well, or if you need to move your server to another datacenter, for example.

Oracle Database Appliance is no exception: the bare minimum configuration consists of 2 ODAs, one for production and one for DR and test/dev, the DR feature being implemented with Data Guard or Dbvisit Standby.

Using Data Guard or Dbvisit Standby also helps when it comes to patching, because it's good practice to patch your ODAs from time to time. You may ask how to proceed when using Data Guard or Dbvisit Standby; here is how I have been doing it for years.

The 3-step patch on ODA

You apply an ODA patch step by step:

  • a few pre-patches to update the DCS components
  • a system patch to update the BIOS, ILOM, operating system and Grid Infrastructure
  • a storage patch to update the data disk firmware
  • a DB patch to update the DB homes, applied once per DB home if you have several
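On a recent ODA release, this sequence roughly maps to the following odacli commands (a sketch: the version string, bundle file name and DB home id are placeholders, the exact steps vary between releases, and you should always follow the patch README):

```shell
# Register the patch bundle in the repository (file name is an example)
odacli update-repository -f /tmp/oda-sm-19.11.0.0.0-patch.zip

# Pre-patches: update the DCS components
odacli update-dcsadmin -v 19.11.0.0.0
odacli update-dcscomponents -v 19.11.0.0.0
odacli update-dcsagent -v 19.11.0.0.0

# System patch: BIOS, ILOM, OS and Grid Infrastructure
odacli update-server -v 19.11.0.0.0

# Storage patch: data disk firmware
odacli update-storage -v 19.11.0.0.0

# DB patch: once per DB home
odacli list-dbhomes
odacli update-dbhome -i <dbhome-id> -v 19.11.0.0.0
```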

The patch can take quite a long time: plan a minimum of 3 to 4 hours for a single-node ODA. If you add preparing the ODA prior to patching, troubleshooting, and running the sanity checks after applying the patch, you most likely need one complete day for patching. Downtime may vary, but as at least one or two reboots are needed, I usually plan for a full day of downtime. Yes, this is huge, but this is real life.

Furthermore, if you don't patch often enough, a single patch will probably not do the job. Patches are not always cumulative, and you sometimes need to apply 3 or 4 patches to reach the latest version, significantly increasing the patching time and the associated downtime.

As if it were not complicated enough, you can encounter problems when patching and get stuck for a while before finding a solution or a workaround. But don't blame Oracle for that: who else bundles such a variety of updates (OS, BIOS, firmware, ILOM, GI, DB) in a single patch? Oracle Database has always been a powerful RDBMS, but with a high degree of complexity. Adding the GI layer, the Linux OS and hardware updates definitely makes patching a tough task.

Patching strategy when using a DR solution

Patching can be your nightmare… or not. It totally depends on how you manage these patches.

First of all, I would recommend patching only an ODA where no primary is running. This is only possible if you use Data Guard or Dbvisit Standby: plan a switchover of the primaries to the other ODA before patching. If something goes wrong during patching, or if it takes more time than planned, it won't have any impact on your business. You may just miss your standby databases for a few hours, but this is normally something you can manage. Highly critical databases may use multiple standbys to keep maximum safety during patches.
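As a sketch, with the Data Guard broker configured (PROD1 and PROD1_STDBY are hypothetical names here), the switchover could look like this; Dbvisit Standby offers an equivalent switchover operation through dbvctl:

```shell
# Verify the broker configuration is healthy before moving the primary role
dgmgrl sys@PROD1 "show configuration"

# Move the primary role to the ODA that will stay online during patching
dgmgrl sys@PROD1 "switchover to 'PROD1_STDBY'"

# Dbvisit Standby equivalent (run from the Dbvisit base directory):
# ./dbvctl -d PROD1 -o switchover
```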

I would also recommend keeping one test primary for each DB home on each ODA (I usually create a DBTEST on each ODA when deploying, and keep it for testing purposes). This primary will be patched to validate the complete patching process.

When you apply the patch on this first ODA, you can optionally stop the patching process before patching the DB homes, to keep them at the same version as the other ODA. Or you can decide to apply this DB home patch: it will update the binaries, but it will not be able to run datapatch on the databases themselves. It doesn't matter AS LONG AS YOU DO NOT SWITCH THE PRIMARIES BACK TO THIS PATCHED ODA. If you apply the DB home patches and then switch over to this updated ODA, your binaries will no longer be in sync with the catalog, which can lead to several problems (especially regarding Java code inside the DB). So applying the complete patch is OK as long as you keep only standby databases on this ODA, until you patch the other ODA.

It’s not recommended to wait more than few weeks for patching the other ODA. So once you successfully patched the first one and you have waited few days to make sure everything is OK with this patch, switchover all the primaries to the patched ODA. Once done, you now need to first apply the DB home patch with update-dbhome on these primaries. You can also use a manual datapatch. This is mandatory to update the catalog to match the binaries’ version. Then apply the complete patch to your other ODA. Once done, both ODAs are up to date and you can dispatch your databases again on both servers.

I would highly recommend verifying on each database that the patch has been applied correctly. Sometimes a manual datapatch is needed, so don't hesitate to run it if something went wrong with odacli update-dbhome:

set serverout on
exec dbms_qopatch.get_sqlpatch_status;
Patch Id : 29972716
Action : APPLY
Action Time : 14-OCT-2020 10:59:29
Logfile :
Status : SUCCESS

And when reimaging?

Sometimes reimaging is a better idea than applying patches. Reimaging may be faster than applying several patches in a row, and there is much less troubleshooting since you wipe everything and restart from scratch. Reimaging can only be considered if you can switch all your primaries to another ODA for several days. You then need to remove the standby configuration from Data Guard (removing only the standby is OK), because your standby database will not be available for hours or days (you will actually rebuild it).
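With the Data Guard broker, removing only the standby from the configuration could look like this (PROD1 and PROD1_STDBY are hypothetical names):

```shell
# Remove the standby from the broker configuration; the primary keeps running
dgmgrl sys@PROD1 "remove database 'PROD1_STDBY'"
```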

When you reimage, you cannot choose which patch will be applied on top of the DB home: it's the one that comes with the global version. So you won't be able to switch your primaries back immediately after patching. For example, going from 18.5 to 19.11 with a reimage (which makes sense because the gap is large) will bring an 18.14 DB home that can cause problems with an 18.5 database's catalog.

Prior to reimaging, I would recommend backing up the standby's controlfile autobackup to an external filesystem; it also includes the latest spfile. Then you don't need to recreate the spfile from a primary pfile, and you won't forget to restore a STANDBY controlfile, because your controlfile backup is already a standby controlfile.
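A minimal RMAN sketch, assuming the autobackup piece was copied to an external filesystem (the path and piece name are examples): restore the spfile and the standby controlfile from that piece, then mount:

```shell
rman target / <<EOF
# Start the instance with a dummy parameter file, then restore the saved spfile
startup nomount;
restore spfile from '/backup/ext/c-1234567890-20201014-00';
shutdown immediate;
startup nomount;
# The autobackup taken on a standby already contains a STANDBY controlfile
restore standby controlfile from '/backup/ext/c-1234567890-20201014-00';
alter database mount;
EOF
```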

And restoring the standby database from a primary backup is OK if you don't want to use RMAN DUPLICATE.

Why can I safely run a mounted standby with different binaries but not using it as a primary?

Normally you should only run binaries and a database catalog of the same version. But running a standby with a different binaries' version is actually OK. This is because the catalog is not opened on a standby: the catalog resides in the SYSTEM tablespace, and a mounted database does not open any datafile. You may also notice that applying datapatch on a primary is only possible because you can open your old database with newer binaries; otherwise you would never be able to update this catalog…

There is no risk in opening a standby in READ ONLY mode with different binaries, or in using the Active Data Guard option, but some advanced features may not work correctly.

Data Guard vs Dbvisit Standby

Data Guard being included with Enterprise Edition, there is no extra license cost for having (at least) one standby for each primary (apart from the cost of the storage). Don't hesitate to give each primary a standby, for Disaster Recovery but also to be able to patch with this method.

Dbvisit's licensing metric is per database, so you may consider using standbys only for production. But as Standard Edition and Dbvisit Standby are quite inexpensive (compared to Enterprise Edition), buying extra licenses for test and dev databases is definitely a brilliant idea, in order to free your ODA from primaries when it comes to patching.


Data Guard and Dbvisit Standby are much more than DR solutions: they simplify the patching of your ODAs and make reimaging possible. This definitely improves your ODAs' lifecycle management.