
dbi services Blog

Welcome to the dbi services Blog! This IT blog focuses on database, middleware, and OS technologies such as Oracle, Microsoft SQL Server & SharePoint, EMC Documentum, MySQL, PostgreSQL, Sybase, Unix/Linux, etc. The dbi services blog represents the view of our consultants, not necessarily that of dbi services. Feel free to comment on our blog postings.


Oracle 12c: Applying PSU with Multitenant DB & unplug/plug

The Multitenant architecture, introduced with Oracle 12c in June 2013, allows several databases to run on a single instance. Oracle presents this feature as a good solution for Oracle patching: it is now possible to unplug a pluggable database (called PDB) from its original container database (called CDB) in order to plug it into a new local or remote container with a higher PSU level. In this post, I will show how you can install the new PSU for Oracle 12c (released in October 2013) using Multitenant databases to keep the downtime as low as possible.

The following diagram shows the concept:




We assume that a company is using a Container database CDB1 with two pluggable databases PDB1 and PDB2 running on it:


SQL> select name, open_mode from v$pdbs;


NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           READ WRITE
PDB2                           READ WRITE


These two databases are accessed by users and applications, and a maintenance window is planned in order to install the latest PSU for Oracle 12c.

Using the OPatch utility, we can see the currently installed patches:


$ opatch lsinventory




We can see that the patch 16527374 is already installed. It was installed to fix a bug between Enterprise Manager Express 12c and Multitenant databases. Also note the presence of the latest OPatch utility.

To unplug PDB2 from the container CDB1 and plug it into a new CDB with a higher PSU level, we need a second RDBMS software installation at the same patch level (plus the EM Express bug fix), on which the new PSU will be installed.

An empty container database CDB2 has been created on the second environment, with the Oracle Home /u00/app/oracle/product/12.1.0/db_2. Now it is time to install the new PSU on the second environment. The listener for this installation is named LISTENER_DB_2:


Step 1: Upgrade the OPatch utility to the latest version (see patch 6880880) if not already done.


Step 2: Download and unzip the PSU on the server (patch 17027533).

$ unzip -d /tmp


Step 3: Check conflicts between the PSU and already installed patches.

$ opatch prereq CheckConflictAgainstOHWithDetail -ph ./




The patch already installed to fix the EM Express bug conflicts with the new PSU. OPatch will remove the existing patch before installing the new PSU. This is not an issue here, since Oracle provides a version of patch 16527374 specific to this Oracle release: it will be possible to reinstall this patch once the PSU is applied.


Step 4: Shut down all databases and listeners running on the Oracle Home of the second environment.

Shut down CDB2:

SQL> shutdown immediate;


I did not create any listener for my second environment. If you plan to drop the current ORACLE_HOME after the upgrade, you will have to create a new listener in the new ORACLE_HOME and shut it down before upgrading.


Step 5: Install the new PSU.

$ cd /tmp/17027533
$ opatch apply



Note that the patch for the EM Express bug fix has to be reinstalled for this release, since it was removed during the PSU install. This bug is not covered by the PSU.


Step 6: Restart databases and listeners.

Restart the CDB2 database:

SQL> startup;


Step 7: Load modified SQL files into the database with Datapatch tool.


$ cd $ORACLE_HOME/OPatch
$ ./datapatch -verbose




It is possible that the following error occurs:

DBD::Oracle::st execute failed: ORA-20001: Latest xml inventory is not loaded into table


In this case, the parameter _disable_directory_link_check must be set to TRUE (see Oracle note 1602089.1) and the database must be restarted:

alter system set "_disable_directory_link_check"=TRUE scope=spfile;


We now have CDB1 running at the original PSU level and CDB2 running at the new PSU level. So far, no downtime has occurred on the PDB1 and PDB2 databases: all configuration and installation steps for the PSU have been performed on a non-productive environment.

The next steps consist of unplugging PDB2 from CDB1 in order to plug it into CDB2.


Step 8: Stop the user application and shut down PDB2 on CDB1.

From this step on, the pluggable database must remain closed: the downtime starts now.


SQL> connect sys/manager@CDB1 as sysdba


Session altered.


Pluggable database altered.
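The SQL statements for this step appeared as screenshots in the original post and did not survive extraction. A plausible sequence matching the "Session altered." and "Pluggable database altered." messages above would be the following sketch (not necessarily the author's exact commands):

```sql
-- make sure we are in the root container ("Session altered.")
alter session set container = CDB$ROOT;

-- close PDB2 so it can be unplugged ("Pluggable database altered.")
alter pluggable database PDB2 close immediate;
```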


Step 9: Unplug PDB2 from CDB1.


Session altered.


Pluggable database altered.
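The unplug statement was likewise shown as a screenshot. Given the manifest file referenced in step 10, the command was most likely of this form (a sketch, assuming the same file paths):

```sql
-- unplug PDB2 and write its description to an XML manifest
-- ("Pluggable database altered.")
alter pluggable database PDB2 unplug into '/u01/oradata/CDB1/PDB2/PDB2.xml';
```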


Step 10: Plug PDB2 into CDB2.


SQL> connect sys/manager@CDB2 as sysdba


SQL> create pluggable database PDB2
     USING '/u01/oradata/CDB1/PDB2/PDB2.xml'
     MOVE FILE_NAME_CONVERT = ('/u01/oradata/CDB1/PDB2','/u01/oradata/CDB2/PDB2');
Pluggable database created.


The use of the MOVE clause makes the creation of the new pluggable database very quick, since the database files are not copied but only moved on the file system. If source and destination are on the same file system, this operation is immediate.


Session altered.


Pluggable database altered.
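The statements producing this output were also screenshots in the original post; a sketch of what they presumably did:

```sql
-- switch into the newly plugged database ("Session altered.")
alter session set container = PDB2;

-- open it for the users ("Pluggable database altered.")
alter pluggable database PDB2 open;
```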


The database PDB2 is now open, and users can access it again. Note that if CDB2 is installed on a different host, users may have to update their TNS connect string.
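For illustration, a tnsnames.ora entry pointing at the new host could look like the following (hostname and port are placeholders, not taken from the original post):

```
PDB2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = new-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = PDB2))
  )
```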


Step 11: Load modified SQL files into the database with Datapatch tool.

Since the RDBMS software was patched before any pluggable database was plugged into CDB2, every newly plugged database must run the "datapatch" script in order to load the modified SQL files.

Run the following command once the CDB2 environment is set:


$ cd $ORACLE_HOME/OPatch
$ ./datapatch -verbose




Important: If you are using static listener registration, do not forget to change your listener.ora in order to provide the true ORACLE_HOME path corresponding to the new environment.
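As an illustration of that note, a static registration entry in listener.ora for the new environment might look like this sketch (the listener name and ORACLE_HOME path are taken from the setup described above; the rest is assumed):

```
SID_LIST_LISTENER_DB_2 =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = CDB2)
      (ORACLE_HOME = /u00/app/oracle/product/12.1.0/db_2)
      (SID_NAME = CDB2)
    )
  )
```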

The pluggable database is now fully ready, and downtime only occurred between steps 8 and 10. Thanks to the MOVE clause, the only downtime corresponds to the time required to shut the database down on the source CDB and start it on the destination CDB, i.e. a few seconds. And if both CDBs run on the same host, users will not even have to update their TNS connect string to access the database.

Patching PDBs using this method might ease the DBA's life in the future :-)


Michael Schwalm is a consultant at dbi services and has more than two years of experience in Oracle database administration. He also has broad knowledge of virtualization infrastructures such as VMware vSphere. He took his first steps in database administration as an integrator of web applications on Unix, Oracle, and WebSphere environments. Michael Schwalm is an Oracle Certified Professional 11g and RAC Implementation Specialist 11g. Prior to joining dbi services, he was an application administrator at SOGETI Est (F) on behalf of PSA Peugeot Citroen, responsible for building and managing Unix environments and Oracle databases in migration projects. He holds a BTS diploma in Information System Management from Belfort (F) and a TSAR diploma in advanced network administration from Strasbourg (F). His industry experience covers automotive, the software industry, financial services / banking, etc.


