In this article, I will show you how to convert a Single Instance (SI) database to a RAC database on an ODA using rconfig. Of course, this is only possible with the ODA HA models, as they come with a 2-node cluster; there is no way to run a RAC database on an ODA light model. I will also make sure that there is no impact on the other databases running in the same Oracle home. This is why I will create 2 test databases sharing one Oracle home. Please note that there will be downtime, as the database needs to be restarted.
Preparation
I will first create a new Oracle home on the existing ODA.
[root@node0 ~]# odacli list-dbhomes ID Name DB Version DB Edition Home Location Status ---------------------------------------- -------------------- -------------------- ---------- -------------------------------------------------------- ---------- 03e59f95-e77f-4429-a9fc-466bea89545b OraDB19000_home4 19.23.0.0.240416 EE /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4 CONFIGURED [root@node0 ~]# odacli create-dbhome -de EE -v 19.23.0.0.240416 Job details ---------------------------------------------------------------- ID: 0f02be31-a2b5-4ba1-af66-76c83d9808f2 Description: Database Home OraDB19000_home5 creation with version :19.23.0.0.240416 Status: Created Created: November 26, 2024 4:02:27 PM CET Message: Create Database Home Task Name Node Name Start Time End Time Status ---------------------------------------- ------------------------- ---------------------------------------- ---------------------------------------- ---------------- [root@node0 ~]# odacli describe-job -i 0f02be31-a2b5-4ba1-af66-76c83d9808f2 Job details ---------------------------------------------------------------- ID: 0f02be31-a2b5-4ba1-af66-76c83d9808f2 Description: Database Home OraDB19000_home5 creation with version :19.23.0.0.240416 Status: Success Created: November 26, 2024 4:02:27 PM CET Message: Create Database Home Task Name Node Name Start Time End Time Status ---------------------------------------- ------------------------- ---------------------------------------- ---------------------------------------- ---------------- Setting up SSH equivalence node0 November 26, 2024 4:02:49 PM CET November 26, 2024 4:02:54 PM CET Success Setting up SSH equivalence node0 November 26, 2024 4:02:54 PM CET November 26, 2024 4:02:58 PM CET Success Creating ACFS database home node0 November 26, 2024 4:02:58 PM CET November 26, 2024 4:02:58 PM CET Success Validating dbHome available space node0 November 26, 2024 4:02:59 PM CET November 26, 2024 4:02:59 PM CET Success Validating dbHome available 
space node1 November 26, 2024 4:02:59 PM CET November 26, 2024 4:02:59 PM CET Success Creating DbHome Directory node1 November 26, 2024 4:03:01 PM CET November 26, 2024 4:03:01 PM CET Success Create required directories node0 November 26, 2024 4:03:01 PM CET November 26, 2024 4:03:01 PM CET Success Extract DB clone node0 November 26, 2024 4:03:02 PM CET November 26, 2024 4:05:00 PM CET Success ProvDbHome by using RHP node0 November 26, 2024 4:05:00 PM CET November 26, 2024 4:08:20 PM CET Success Enable DB options node0 November 26, 2024 4:08:21 PM CET November 26, 2024 4:08:48 PM CET Success Creating wallet for DB Client node0 November 26, 2024 4:09:02 PM CET November 26, 2024 4:09:03 PM CET Success [root@node0 ~]# odacli list-dbhomes ID Name DB Version DB Edition Home Location Status ---------------------------------------- -------------------- -------------------- ---------- -------------------------------------------------------- ---------- 03e59f95-e77f-4429-a9fc-466bea89545b OraDB19000_home4 19.23.0.0.240416 EE /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4 CONFIGURED 1c6059a2-f6c7-4bca-a07a-8efc0757ed08 OraDB19000_home5 19.23.0.0.240416 EE /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5 CONFIGURED
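When scripting this kind of provisioning, it can help to check the job result programmatically instead of reading through the full output. The sketch below extracts the overall status from a saved snippet of the `odacli describe-job` output above; the sample text is hardcoded for illustration, in real use you would capture the command output (e.g. `job_output=$(odacli describe-job -i <job-id>)`).

```shell
#!/bin/sh
# Trimmed sample of "odacli describe-job" output (hardcoded for illustration)
job_output='Job details
----------------------------------------------------------------
                     ID:  0f02be31-a2b5-4ba1-af66-76c83d9808f2
            Description:  Database Home OraDB19000_home5 creation
                 Status:  Success'

# Extract the overall job status from the header section
job_status=$(printf '%s\n' "$job_output" | awk -F': *' '/^ *Status:/ {print $2}')
echo "Job status: $job_status"
```

The same pattern works for any odacli job: loop on `odacli describe-job` until the status is no longer Running.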
I will create 2 new databases in this new DB home: TEST1 and TEST2.
[root@node0 ~]# odacli list-databases ID DB Name DB Type DB Version CDB Class Edition Shape Storage Status DB Home ID ---------------------------------------- ---------- -------- -------------------- ------- -------- -------- -------- -------- ------------ ---------------------------------------- 2d824a9f-735a-4e8d-b6c8-5393ddc894e9 DBSI6 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 0393d997-50aa-4511-b5b9-c4ff2da393db DBGI3 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 712d542e-ded7-4d1a-9b9d-7c335042ffc0 DAWHT SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 28026894-0c2d-417b-b11a-d76516805247 DBSI1 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 11a12489-2483-4f8a-bb60-7145417181a1 DBSI4 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b cd183219-3daa-4154-b4a4-41b92d4f8155 DBBI1 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b fdfe3197-223f-4660-a834-4736f50110ef DBSI2 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 7391380b-f609-4457-be6b-bd9afa51148c DBBI2 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b ce4350ed-e291-4815-8c43-3c6716d6402f DBGI6 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 274e6069-b174-43fb-8625-70e1e333f160 DBSI5 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 52eadf14-4d20-4910-91ca-a335361d53b2 RCDB SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b ec54945d-d0de-4b92-8822-2bd0d31fe653 DBSI3 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 95420f39-db33-4c4d-8d85-a5f8d42945e6 DBBI3 SI 19.23.0.0.240416 false OLTP EE odb1 
ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b b7c42ea7-6eab-4b98-8ea7-8dd4ce9517a1 DBBI4 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 707251cc-f19a-4b8c-89cc-63477c5747d0 DBGI4 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 2bbd4391-5eed-4878-b2e5-3670587527f6 DBBI6 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b f540b5d1-c074-457a-85e2-d35240541efd DBGI2 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 16a24733-cfba-4e75-a9ce-59b3779dc82e DBGI1 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b [root@node0 ~]# odacli create-database -n TEST1 -u TEST1 -y SI -g 0 -cl OLTP -no-c -no-co -cs UTF8 -ns AL16UTF16 -l AMERICAN -dt AMERICA -dh 1c6059a2-f6c7-4bca-a07a-8efc0757ed08 -s odb1 -r ACFS Enter SYS and SYSTEM user password: Retype SYS and SYSTEM user password: Job details ---------------------------------------------------------------- ID: 88cd622d-7896-4a7c-b2d5-bb113438b2d2 Description: Database service creation with DB name: TEST1 Status: Created Created: November 26, 2024 4:12:37 PM CET Message: Task Name Start Time End Time Status ---------------------------------------- ---------------------------------------- ---------------------------------------- ---------------- [root@node0 ~]# odacli describe-job -i 88cd622d-7896-4a7c-b2d5-bb113438b2d2 Job details ---------------------------------------------------------------- ID: 88cd622d-7896-4a7c-b2d5-bb113438b2d2 Description: Database service creation with DB name: TEST1 Status: Success Created: November 26, 2024 4:12:37 PM CET Message: Task Name Node Name Start Time End Time Status ---------------------------------------- ------------------------- ---------------------------------------- ---------------------------------------- ---------------- Setting up SSH equivalence node0 November 26, 2024 
4:12:45 PM CET November 26, 2024 4:12:49 PM CET Success Setting up SSH equivalence node0 November 26, 2024 4:12:49 PM CET November 26, 2024 4:12:52 PM CET Success Creating volume dclTEST1 node0 November 26, 2024 4:12:53 PM CET November 26, 2024 4:13:14 PM CET Success Creating volume datTEST1 node0 November 26, 2024 4:13:14 PM CET November 26, 2024 4:13:35 PM CET Success Creating ACFS filesystem for DATA node0 November 26, 2024 4:13:36 PM CET November 26, 2024 4:14:06 PM CET Success Database Service creation node0 November 26, 2024 4:14:10 PM CET November 26, 2024 4:27:39 PM CET Success Database Creation by RHP node0 November 26, 2024 4:14:10 PM CET November 26, 2024 4:24:27 PM CET Success Change permission for xdb wallet files node1 November 26, 2024 4:24:29 PM CET November 26, 2024 4:24:32 PM CET Success Place SnapshotCtrlFile in sharedLoc node0 November 26, 2024 4:24:33 PM CET November 26, 2024 4:24:37 PM CET Success SqlPatch upgrade node0 November 26, 2024 4:26:06 PM CET November 26, 2024 4:26:28 PM CET Success Running dbms_stats init_package node0 November 26, 2024 4:26:29 PM CET November 26, 2024 4:26:32 PM CET Success Set log_archive_dest for Database node0 November 26, 2024 4:26:32 PM CET November 26, 2024 4:26:34 PM CET Success Updating the Database version node1 November 26, 2024 4:26:35 PM CET November 26, 2024 4:26:40 PM CET Success Create Users tablespace node0 November 26, 2024 4:27:39 PM CET November 26, 2024 4:27:42 PM CET Success Clear all listeners from Database node0 November 26, 2024 4:27:43 PM CET November 26, 2024 4:27:44 PM CET Success Copy Pwfile to Shared Storage node0 November 26, 2024 4:27:47 PM CET November 26, 2024 4:27:50 PM CET Success Configure All Candidate Nodes node0 November 26, 2024 4:27:50 PM CET November 26, 2024 4:27:52 PM CET Success [root@node0 ~]# odacli create-database -n TEST2 -u TEST2 -y SI -g 0 -cl OLTP -no-c -no-co -cs UTF8 -ns AL16UTF16 -l AMERICAN -dt AMERICA -dh 1c6059a2-f6c7-4bca-a07a-8efc0757ed08 -s odb1 -r ACFS 
Enter SYS and SYSTEM user password: Retype SYS and SYSTEM user password: Job details ---------------------------------------------------------------- ID: 95f53872-96c8-4eb4-903f-f9e5a4701db5 Description: Database service creation with DB name: TEST2 Status: Created Created: November 26, 2024 4:29:45 PM CET Message: Task Name Start Time End Time Status ---------------------------------------- ---------------------------------------- ---------------------------------------- ---------------- [root@node0 ~]# odacli describe-job -i 95f53872-96c8-4eb4-903f-f9e5a4701db5 Job details ---------------------------------------------------------------- ID: 95f53872-96c8-4eb4-903f-f9e5a4701db5 Description: Database service creation with DB name: TEST2 Status: Success Created: November 26, 2024 4:29:45 PM CET Message: Task Name Node Name Start Time End Time Status ---------------------------------------- ------------------------- ---------------------------------------- ---------------------------------------- ---------------- Setting up SSH equivalence node0 November 26, 2024 4:29:54 PM CET November 26, 2024 4:29:58 PM CET Success Setting up SSH equivalence node0 November 26, 2024 4:29:58 PM CET November 26, 2024 4:30:02 PM CET Success Creating volume dclTEST2 node0 November 26, 2024 4:30:03 PM CET November 26, 2024 4:30:26 PM CET Success Creating volume datTEST2 node0 November 26, 2024 4:30:26 PM CET November 26, 2024 4:30:51 PM CET Success Creating ACFS filesystem for DATA node0 November 26, 2024 4:30:51 PM CET November 26, 2024 4:31:24 PM CET Success Database Service creation node0 November 26, 2024 4:31:28 PM CET November 26, 2024 4:44:52 PM CET Success Database Creation by RHP node0 November 26, 2024 4:31:28 PM CET November 26, 2024 4:41:51 PM CET Success Change permission for xdb wallet files node1 November 26, 2024 4:41:53 PM CET November 26, 2024 4:41:55 PM CET Success Place SnapshotCtrlFile in sharedLoc node0 November 26, 2024 4:41:55 PM CET November 26, 2024 4:42:00 PM 
CET Success SqlPatch upgrade node0 November 26, 2024 4:43:26 PM CET November 26, 2024 4:43:47 PM CET Success Running dbms_stats init_package node0 November 26, 2024 4:43:47 PM CET November 26, 2024 4:43:49 PM CET Success Set log_archive_dest for Database node0 November 26, 2024 4:43:49 PM CET November 26, 2024 4:43:52 PM CET Success Updating the Database version node1 November 26, 2024 4:43:52 PM CET November 26, 2024 4:43:57 PM CET Success Create Users tablespace node0 November 26, 2024 4:44:52 PM CET November 26, 2024 4:44:56 PM CET Success Clear all listeners from Database node0 November 26, 2024 4:44:57 PM CET November 26, 2024 4:44:59 PM CET Success Copy Pwfile to Shared Storage node0 November 26, 2024 4:45:02 PM CET November 26, 2024 4:45:05 PM CET Success Configure All Candidate Nodes node0 November 26, 2024 4:45:05 PM CET November 26, 2024 4:45:07 PM CET Success
We now have 2 new databases, TEST1 and TEST2, in the new Oracle home dbhome_5.
[root@node0 ~]# odacli list-dbhomes ID Name DB Version DB Edition Home Location Status ---------------------------------------- -------------------- -------------------- ---------- -------------------------------------------------------- ---------- 03e59f95-e77f-4429-a9fc-466bea89545b OraDB19000_home4 19.23.0.0.240416 EE /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4 CONFIGURED 1c6059a2-f6c7-4bca-a07a-8efc0757ed08 OraDB19000_home5 19.23.0.0.240416 EE /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5 CONFIGURED [root@node0 ~]# odacli list-databases ID DB Name DB Type DB Version CDB Class Edition Shape Storage Status DB Home ID ---------------------------------------- ---------- -------- -------------------- ------- -------- -------- -------- -------- ------------ ---------------------------------------- 2d824a9f-735a-4e8d-b6c8-5393ddc894e9 DBSI6 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 0393d997-50aa-4511-b5b9-c4ff2da393db DBGI3 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 712d542e-ded7-4d1a-9b9d-7c335042ffc0 DAWHT SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 28026894-0c2d-417b-b11a-d76516805247 DBSI1 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 11a12489-2483-4f8a-bb60-7145417181a1 DBSI4 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b cd183219-3daa-4154-b4a4-41b92d4f8155 DBBI1 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b fdfe3197-223f-4660-a834-4736f50110ef DBSI2 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 7391380b-f609-4457-be6b-bd9afa51148c DBBI2 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b ce4350ed-e291-4815-8c43-3c6716d6402f DBGI6 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS 
CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 274e6069-b174-43fb-8625-70e1e333f160 DBSI5 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 52eadf14-4d20-4910-91ca-a335361d53b2 RCDB SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b ec54945d-d0de-4b92-8822-2bd0d31fe653 DBSI3 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 95420f39-db33-4c4d-8d85-a5f8d42945e6 DBBI3 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b b7c42ea7-6eab-4b98-8ea7-8dd4ce9517a1 DBBI4 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 707251cc-f19a-4b8c-89cc-63477c5747d0 DBGI4 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 2bbd4391-5eed-4878-b2e5-3670587527f6 DBBI6 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b f540b5d1-c074-457a-85e2-d35240541efd DBGI2 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 16a24733-cfba-4e75-a9ce-59b3779dc82e DBGI1 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 4358287c-9cf0-45d4-a7e3-a59f933e86b2 TEST1 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 1c6059a2-f6c7-4bca-a07a-8efc0757ed08 110f26e7-f9f3-412e-9443-a201d24201a0 TEST2 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 1c6059a2-f6c7-4bca-a07a-8efc0757ed08 [root@node0 ~]#
Check TEST1 and TEST2 Single Instance databases
The TEST1 database is open READ WRITE.
oracle@node0:~/ [rdbms1900] TEST1 ****************************************************** INSTANCE_NAME : TEST1 DB_NAME : TEST1 DB_UNIQUE_NAME : TEST1 STATUS : OPEN READ WRITE LOG_MODE : ARCHIVELOG USERS/SESSIONS : Normal: 0/0, Oracle-maintained: 2/6 DATABASE_ROLE : PRIMARY FLASHBACK_ON : NO FORCE_LOGGING : YES VERSION : 19.23.0.0.0 NLS_LANG : AMERICAN_AMERICA.UTF8 CDB_ENABLED : NO ****************************************************** Statustime: 2024-11-27 09:25:43
The instance name is the same as the database name, TEST1. It is a single instance, and we chose node0 as the hosting node (option -g 0 of the odacli command). This is confirmed below: there is only one instance, running on node0.
oracle@node0:~/ [TEST1] ps -ef | grep -i [p]mon | grep -i test1 oracle 22300 1 0 Nov26 ? 00:00:03 ora_pmon_TEST1 oracle@node1:~/ [rdbms192300_a] ps -ef | grep -i [p]mon | grep -i test oracle@node1:~/ [rdbms192300_a] oracle@node0:~/ [TEST1] srvctl status database -d TEST1 Instance TEST1 is running on node node0
Checking the Grid Infrastructure configuration, we can see that there is only one instance and 2 configured nodes:
Database instance: TEST1
Configured nodes: node0,node1
oracle@node0:~/ [TEST1] srvctl config database -d TEST1 Database unique name: TEST1 Database name: TEST1 Oracle home: /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5 Oracle user: oracle Spfile: /u02/app/oracle/oradata/TEST1/dbs/spfileTEST1.ora Password file: /u02/app/oracle/oradata/TEST1/dbs/orapwTEST1 Domain: swisslos.local Start options: open Stop options: immediate Database role: PRIMARY Management policy: AUTOMATIC Server pools: Disk Groups: DATA Mount point paths: /u01/app/odaorahome,/u02/app/oracle/oradata/TEST1,/u03/app/oracle/,/u01/app/odaorabase0,/u01/app/odaorabase1 Services: Type: SINGLE OSDBA group: dba OSOPER group: dbaoper Database instance: TEST1 Configured nodes: node0,node1 CSS critical: no CPU count: 0 Memory target: 0 Maximum memory: 0 Default network number for database services: Database is administrator managed
We can also check that the cluster_database instance parameter is set to FALSE.
oracle@node0:~/ [TEST1] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Wed Nov 27 09:32:33 2024 Version 19.23.0.0.0 Copyright (c) 1982, 2023, Oracle. All rights reserved. Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.23.0.0.0 SQL> select inst_id, name, value from gv$parameter where name='cluster_database'; INST_ID NAME VALUE ---------- -------------------- -------------------- 1 cluster_database FALSE SQL>
The same can be checked on the TEST2 database.
oracle@node0:~/ [TEST1] TEST2 ****************************************************** INSTANCE_NAME : TEST2 DB_NAME : TEST2 DB_UNIQUE_NAME : TEST2 STATUS : OPEN READ WRITE LOG_MODE : ARCHIVELOG USERS/SESSIONS : Normal: 0/0, Oracle-maintained: 2/5 DATABASE_ROLE : PRIMARY FLASHBACK_ON : NO FORCE_LOGGING : YES VERSION : 19.23.0.0.0 NLS_LANG : AMERICAN_AMERICA.UTF8 CDB_ENABLED : NO ****************************************************** Statustime: 2024-11-27 09:33:30 oracle@node0:~/ [TEST2] ps -ef | grep -i [p]mon | grep -i test2 oracle 89478 1 0 Nov26 ? 00:00:03 ora_pmon_TEST2 oracle@node1:~/ [rdbms192300_a] ps -ef | grep -i [p]mon | grep -i test oracle@node1:~/ [rdbms192300_a] oracle@node0:~/ [TEST2] srvctl status database -d TEST2 Instance TEST2 is running on node node0 oracle@node0:~/ [TEST2] srvctl config database -d TEST2 Database unique name: TEST2 Database name: TEST2 Oracle home: /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5 Oracle user: oracle Spfile: /u02/app/oracle/oradata/TEST2/dbs/spfileTEST2.ora Password file: /u02/app/oracle/oradata/TEST2/dbs/orapwTEST2 Domain: swisslos.local Start options: open Stop options: immediate Database role: PRIMARY Management policy: AUTOMATIC Server pools: Disk Groups: DATA Mount point paths: /u01/app/odaorahome,/u02/app/oracle/oradata/TEST2,/u03/app/oracle/,/u01/app/odaorabase0,/u01/app/odaorabase1 Services: Type: SINGLE OSDBA group: dba OSOPER group: dbaoper Database instance: TEST2 Configured nodes: node0,node1 CSS critical: no CPU count: 0 Memory target: 0 Maximum memory: 0 Default network number for database services: Database is administrator managed oracle@node0:~/ [TEST2] oracle@node0:~/ [TEST2] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Wed Nov 27 09:34:40 2024 Version 19.23.0.0.0 Copyright (c) 1982, 2023, Oracle. All rights reserved. 
Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.23.0.0.0 SQL> select inst_id, name, value from gv$parameter where name='cluster_database'; INST_ID NAME VALUE ---------- -------------------- -------------------- 1 cluster_database FALSE SQL>
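When this check has to be repeated for many databases, the `Type:` line of the `srvctl config database` output can be parsed in a script. This is a minimal sketch working on a saved excerpt of the output above; in real use you would capture the command output directly (e.g. `config=$(srvctl config database -d TEST1)`).

```shell
#!/bin/sh
# Excerpt of "srvctl config database -d TEST1" output (hardcoded sample)
config='Spfile: /u02/app/oracle/oradata/TEST1/dbs/spfileTEST1.ora
Type: SINGLE
Database instance: TEST1
Configured nodes: node0,node1'

# SINGLE before the conversion, RAC afterwards
db_type=$(printf '%s\n' "$config" | awk -F': *' '/^Type:/ {print $2}')
echo "Database type: $db_type"
```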
Convert TEST1 database to RAC using rconfig
rconfig template
rconfig uses an XML configuration file. Templates can be found in the Oracle DB home.
oracle@node0:~/ [TEST1] ls -ltrh $ORACLE_HOME/assistants/rconfig/sampleXMLs total 8.0K -rw-r----- 1 oracle oinstall 2.6K Mar 9 2018 ConvertToRAC_PolicyManaged.xml -rw-r----- 1 oracle oinstall 2.5K Jul 16 2018 ConvertToRAC_AdminManaged.xml
We will use the ConvertToRAC_AdminManaged.xml template and adapt it to our needs in a new XML file, named ConvertToRAC_TEST1.xml, in order to convert the TEST1 database.
oracle@node0:~/mwagner/ [TEST1] mkdir rconfig_xml oracle@node0:~/mwagner/ [TEST1] cd rconfig_xml oracle@node0:~/mwagner/rconfig_xml/ [TEST1] cp -p $ORACLE_HOME/assistants/rconfig/sampleXMLs/ConvertToRAC_AdminManaged.xml ./ConvertToRAC_TEST1.xml oracle@node0:~/mwagner/rconfig_xml/ [TEST1] vi ConvertToRAC_TEST1.xml
Let’s display the updated XML file that will be used for the conversion.
oracle@node0:~/mwagner/rconfig_xml/ [TEST1] cat ConvertToRAC_TEST1.xml <?xml version="1.0" encoding="UTF-8"?> <n:RConfig xmlns:n="http://www.oracle.com/rconfig" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.oracle.com/rconfig rconfig.xsd"> <n:ConvertToRAC> <!-- Verify does a precheck to ensure all pre-requisites are met, before the conversion is attempted. Allowable values are: YES|NO|ONLY --> <n:Convert verify="ONLY"> <!--Specify current OracleHome of non-rac database for SourceDBHome --> <n:SourceDBHome>/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5</n:SourceDBHome> <!--Specify OracleHome where the rac database should be configured. It can be same as SourceDBHome --> <n:TargetDBHome>/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5</n:TargetDBHome> <!--Specify SID of non-rac database --> <n:SourceDBInfo SID="TEST1"/> <!--Specify the list of nodes that should have rac instances running for the Admin Managed Cluster Database. LocalNode should be the first node in this nodelist. --> <n:NodeList> <n:Node name="node0"/> <n:Node name="node1"/> </n:NodeList> <!--Specify RacOneNode along with servicename to convert database to RACOne Node --> <!--n:RacOneNode servicename="salesrac1service"/--> <!--Instance Prefix tag is optional starting with 11.2. If left empty, it is derived from db_unique_name.--> <n:InstancePrefix></n:InstancePrefix> <!-- Listener details are no longer needed starting 11.2. Database is registered with default listener and SCAN listener running from Oracle Grid Infrastructure home. --> <!--Specify the type of storage to be used by rac database. Allowable values are CFS|ASM. The non-rac database should have same storage type. ASM credentials are no needed for conversion. --> <n:SharedStorage type="ASM"> <!--Specify Database Area Location to be configured for rac database.If this field is left empty, current storage will be used for rac database. For CFS, this field will have directory path. 
--> <n:TargetDatabaseArea></n:TargetDatabaseArea> <!--Specify Fast Recovery Area to be configured for rac database. If this field is left empty, current recovery area of non-rac database will be configured for rac database. If current database is not using recovery Area, the resulting rac database will not have a recovery area. --> <n:TargetFlashRecoveryArea></n:TargetFlashRecoveryArea> </n:SharedStorage> </n:Convert> </n:ConvertToRAC> </n:RConfig>
Some of the XML tags are quite easy to understand; others need to be handled carefully.
TAG | Explanation |
---|---|
Convert verify | Can be YES, NO or ONLY. If set to ONLY, rconfig only checks whether the conversion is possible but does not run it. If set to YES, rconfig runs the prechecks and then performs the conversion; if set to NO, it performs the conversion without any precheck. I would strongly recommend using YES and avoiding NO. |
SourceDBHome | DB Home used by the database to convert, in our case: /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5 |
TargetDBHome | DB Home the RAC database will use. We choose to keep the same one. |
SourceDBInfo SID | Database SID, in our case TEST1 |
NodeList Node name | List of all node hostnames. The local node should be the first one in the list. |
RacOneNode servicename | N/A; as you can see, this tag has been commented out (note the !-- markers). |
InstancePrefix | Prefix used to name the instances. In our case we leave it blank, so the db_unique_name is used as prefix and the instances are named db_unique_name[1-N]: TEST11 and TEST12. |
SharedStorage type | Here we need to pay attention. We are using ACFS, so we need to make sure we enter ASM. Configuring it incorrectly might delete all the database files. |
TargetDatabaseArea | Here we need to pay attention too. We are using ACFS, so we need to make sure to leave it blank. Pointing it to the ACFS database files directory would delete all the database files. |
TargetFlashRecoveryArea | We keep it blank as we will reuse the same recovery area. |
Please pay particular attention to both the SharedStorage type and TargetDatabaseArea parameters.
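Given the risk attached to these two parameters, it can be worth sanity-checking the XML file before running rconfig. The sketch below is a minimal illustration: a stripped-down sample of the Convert element is embedded in a heredoc, while in real use you would point the sed commands at ConvertToRAC_TEST1.xml.

```shell
#!/bin/sh
# Sanity-check the dangerous rconfig settings before the conversion.
# A stripped-down sample XML is embedded here for illustration only.
xml_file=$(mktemp)
cat > "$xml_file" <<'EOF'
<n:Convert verify="ONLY">
<n:SharedStorage type="ASM">
<n:TargetDatabaseArea></n:TargetDatabaseArea>
</n:SharedStorage>
</n:Convert>
EOF

verify=$(sed -n 's/.*<n:Convert verify="\([A-Z]*\)".*/\1/p' "$xml_file")
storage=$(sed -n 's/.*<n:SharedStorage type="\([A-Z]*\)".*/\1/p' "$xml_file")
area=$(sed -n 's/.*<n:TargetDatabaseArea>\(.*\)<\/n:TargetDatabaseArea>.*/\1/p' "$xml_file")

# Warn on the dangerous combinations described in the table above
[ "$verify" = "NO" ] && echo "WARNING: verify=NO skips the prechecks"
[ "$storage" != "ASM" ] && echo "WARNING: on ACFS, SharedStorage type must be ASM"
[ -n "$area" ] && echo "WARNING: on ACFS, TargetDatabaseArea must stay empty"
echo "verify=$verify storage=$storage"
rm -f "$xml_file"
```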
Check/test conversion
We will make sure that the Convert verify value is set to ONLY in the XML file, so rconfig will only check whether the conversion is possible and will not run any conversion.
oracle@node0:~/mwagner/rconfig_xml/ [TEST1] grep -i "Convert verify" ConvertToRAC_TEST1.xml <n:Convert verify="ONLY"> oracle@node0:~/mwagner/rconfig_xml/ [TEST1]
I will now run rconfig.
oracle@node0:~/mwagner/rconfig_xml/ [TEST1] cd $ORACLE_HOME/bin oracle@node0:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5/bin/ [TEST1] ./rconfig ~/mwagner/rconfig_xml/ConvertToRAC_TEST1.xml Specify sys user password for the database <?xml version="1.0" ?> <RConfig version="1.1" > <ConvertToRAC> <Convert> <Response> <Result code="0" > Operation Succeeded </Result> </Response> <ReturnValue type="object"> There is no return value for this step </ReturnValue> </Convert> </ConvertToRAC></RConfig>
The rconfig conversion precheck has been executed successfully, and the database is ready to be converted. Note the result code 0 and the message “Operation Succeeded”.
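When wrapping rconfig in a script, this result code can be extracted from the XML it prints. A sketch on a hardcoded excerpt of the output above (in real use you would capture the rconfig output, e.g. `rconfig_out=$(./rconfig ConvertToRAC_TEST1.xml)`):

```shell
#!/bin/sh
# Excerpt of the rconfig result XML (hardcoded sample for illustration)
rconfig_out='<Response> <Result code="0" > Operation Succeeded </Result> </Response>'

# Pull out the numeric result code; 0 means the operation succeeded
code=$(printf '%s\n' "$rconfig_out" | sed -n 's/.*Result code="\([0-9]*\)".*/\1/p')
if [ "$code" = "0" ]; then
  echo "precheck passed"
else
  echo "precheck failed with code $code"
fi
```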
Convert the database
So we can now convert the database. We will change the XML file and set “Convert verify” to YES.
oracle@node0:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5/bin/ [TEST1] vi ~/mwagner/rconfig_xml/ConvertToRAC_TEST1.xml oracle@node0:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5/bin/ [TEST1] grep -i "Convert verify" ~/mwagner/rconfig_xml/ConvertToRAC_TEST1.xml <n:Convert verify="YES">
And we can run the conversion.
oracle@node0:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5/bin/ [TEST1] ./rconfig ~/mwagner/rconfig_xml/ConvertToRAC_TEST1.xml Specify sys user password for the database Converting Database "TEST1.swisslos.local" to Cluster Database. Target Oracle Home: /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5. Database Role: PRIMARY. Setting Data Files and Control Files Adding Trace files Adding Database Instances Create temporary password file Adding Redo Logs Enabling threads for all Database Instances Setting TEMP tablespace Adding UNDO tablespaces Setting Fast Recovery Area Updating Oratab Creating Password file(s) Configuring related CRS resources Starting Cluster Database <?xml version="1.0" ?> <RConfig version="1.1" > <ConvertToRAC> <Convert> <Response> <Result code="0" > Operation Succeeded </Result> </Response> <ReturnValue type="object"> <Oracle_Home> /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5 </Oracle_Home> <Database type="ADMIN_MANAGED" > <InstanceList> <Instance SID="TEST11" Node="node0" > </Instance> <Instance SID="TEST12" Node="node1" > </Instance> </InstanceList> </Database> </ReturnValue> </Convert> </ConvertToRAC></RConfig> oracle@node0:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5/bin/ [TEST1]
Note the result code 0 and the message “Operation Succeeded”. This means the conversion was executed successfully.
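The ReturnValue section of the rconfig output also lists the new instances and the nodes they run on, which can be parsed the same way. A sketch on a saved excerpt (hardcoded here for illustration):

```shell
#!/bin/sh
# Excerpt of rconfig's ReturnValue section (hardcoded sample for illustration)
return_value='<InstanceList> <Instance SID="TEST11" Node="node0" > </Instance> <Instance SID="TEST12" Node="node1" > </Instance> </InstanceList>'

# List the SIDs of the instances created by the conversion
sids=$(printf '%s\n' "$return_value" \
  | grep -o 'SID="[^"]*"' | sed 's/SID="\(.*\)"/\1/' | xargs)
echo "Instances created: $sids"
```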
UNDO and REDO creation
As expected, since we now have 2 instances, we need 2 UNDO tablespaces and 2 sets of online redo logs. We can see that an additional UNDO tablespace has been created.
We can check this on the file system and see the additional UNDO tablespace data file.
Before:
oracle@node1:~/ [rdbms192300_a] ls -ltrh /u02/app/oracle/oradata/TEST1/TEST1/datafile/ total 2.4G -rw-r----- 1 oracle asmadmin 5.1M Nov 26 16:32 o1_mf_users_mnct7fwn_.dbf -rw-r----- 1 oracle asmadmin 252M Nov 26 22:06 o1_mf_temp_mncso2l7_.tmp -rw-r----- 1 oracle asmadmin 1011M Nov 27 10:18 o1_mf_sysaux_mncslcxc_.dbf -rw-r----- 1 oracle asmadmin 101M Nov 27 10:18 o1_mf_undotbs1_mncslvbn_.dbf -rw-r----- 1 oracle asmadmin 1.1G Nov 27 10:25 o1_mf_system_mncsk8kf_.dbf
After:
oracle@node1:~/ [rdbms192300_a] ls -ltrh /u02/app/oracle/oradata/TEST1/TEST1/datafile/ total 2.6G -rw-r----- 1 oracle asmadmin 5.1M Nov 26 16:32 o1_mf_users_mnct7fwn_.dbf -rw-r----- 1 oracle asmadmin 101M Nov 27 10:18 o1_mf_undotbs1_mncslvbn_.dbf -rw-r----- 1 oracle asmadmin 101M Nov 27 10:42 o1_mf_undotbs2_mnft7oh9_.dbf -rw-r----- 1 oracle asmadmin 1011M Nov 27 10:42 o1_mf_sysaux_mncslcxc_.dbf -rw-r----- 1 oracle asmadmin 1.1G Nov 27 10:42 o1_mf_system_mncsk8kf_.dbf -rw-r----- 1 oracle asmadmin 284M Nov 27 10:42 o1_mf_temp_mncso2l7_.tmp
This can, of course, also be confirmed from the alert log file.
oracle@node0:/u01/app/odaorabase/oracle/diag/rdbms/test1/TEST11/trace/ [TEST11] grep -i UNDOTBS2 alert_TEST11.log create undo tablespace UNDOTBS2 datafile size 102400K AUTOEXTEND ON MAXSIZE UNLIMITED Completed: create undo tablespace UNDOTBS2 datafile size 102400K AUTOEXTEND ON MAXSIZE UNLIMITED ALTER SYSTEM SET undo_tablespace='UNDOTBS2' SCOPE=SPFILE SID='TEST12'; oracle@node0:/u01/app/odaorabase/oracle/diag/rdbms/test1/TEST11/trace/ [TEST11] oracle@node0:/u01/app/odaorabase/oracle/diag/rdbms/test1/TEST11/trace/ [TEST11] vi alert_TEST11.log ... ... ... 2024-11-27T10:40:05.316493+01:00 ALTER SYSTEM SET undo_tablespace='UNDOTBS1' SCOPE=SPFILE SID='TEST11'; create undo tablespace UNDOTBS2 datafile size 102400K AUTOEXTEND ON MAXSIZE UNLIMITED 2024-11-27T10:40:06.594880+01:00 Completed: create undo tablespace UNDOTBS2 datafile size 102400K AUTOEXTEND ON MAXSIZE UNLIMITED 2024-11-27T10:40:06.726472+01:00 ALTER SYSTEM SET undo_tablespace='UNDOTBS2' SCOPE=SPFILE SID='TEST12'; 2024-11-27T10:40:06.735353+01:00 ALTER SYSTEM RESET undo_tablespace SCOPE=SPFILE SID='*';
In the alert log we can also find the creation of the new redo log groups for the additional instance (thread 2).
oracle@node0:/u01/app/odaorabase/oracle/diag/rdbms/test1/TEST11/trace/ [TEST11] grep -i "alter database add logfile" alert_TEST11.log alter database add logfile thread 2 group 4 size 4294967296 Completed: alter database add logfile thread 2 group 4 size 4294967296 alter database add logfile thread 2 group 5 size 4294967296 Completed: alter database add logfile thread 2 group 5 size 4294967296 alter database add logfile thread 2 group 6 size 4294967296 Completed: alter database add logfile thread 2 group 6 size 4294967296 oracle@node0:/u01/app/odaorabase/oracle/diag/rdbms/test1/TEST11/trace/ [TEST11] vi alert_TEST11.log ... ... ... 2024-11-27T10:34:46.267851+01:00 CJQ0 started with pid=67, OS id=69666 alter database add logfile thread 2 group 4 size 4294967296 2024-11-27T10:34:49.541664+01:00 Completed: alter database add logfile thread 2 group 4 size 4294967296 alter database add logfile thread 2 group 5 size 4294967296 2024-11-27T10:34:52.100314+01:00 Completed: alter database add logfile thread 2 group 5 size 4294967296 alter database add logfile thread 2 group 6 size 4294967296 2024-11-27T10:34:55.037728+01:00 Completed: alter database add logfile thread 2 group 6 size 4294967296
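The size in these add-logfile statements is expressed in bytes. A quick conversion shows that rconfig created the thread 2 redo log groups with a size of 4 GB:

```shell
#!/bin/sh
# Convert the redo log size from the DDL above (bytes) into GB
size_bytes=4294967296
size_gb=$(( size_bytes / 1024 / 1024 / 1024 ))
echo "${size_gb} GB"
```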
Check that TEST1 database is now a RAC database
Let’s check that the TEST1 database is now a RAC database.
The database is open READ WRITE.
oracle@node0:~/ [rdbms1900] TEST11 ****************************************************** INSTANCE_NAME : TEST11 DB_NAME : TEST1 DB_UNIQUE_NAME : TEST1 STATUS : OPEN READ WRITE LOG_MODE : ARCHIVELOG USERS/SESSIONS : Normal: 0/0, Oracle-maintained: 2/6 DATABASE_ROLE : PRIMARY FLASHBACK_ON : NO FORCE_LOGGING : YES VERSION : 19.23.0.0.0 NLS_LANG : AMERICAN_AMERICA.UTF8 CDB_ENABLED : NO ****************************************************** Statustime: 2024-11-27 10:45:40
We now have 2 instances for the database, named db_unique_name[1-2], one running on each node.
oracle@node0:~/ [TEST11] ps -ef | grep -i [p]mon | grep -i test1 oracle 92045 1 0 10:41 ? 00:00:00 ora_pmon_TEST11 oracle@node0:~/ [TEST11] oracle@node1:~/ [rdbms192300_a] ps -ef | grep -i [p]mon | grep -i test oracle 6721 1 0 10:41 ? 00:00:00 ora_pmon_TEST12 oracle@node1:~/ [rdbms192300_a] oracle@node0:~/ [TEST11] srvctl status database -d TEST1 Instance TEST11 is running on node node0 Instance TEST12 is running on node node1
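As a side note, the instance naming follows the usual RAC convention: the instance prefix (here the db_unique_name, TEST1) suffixed with the instance number. A trivial shell illustration:

```shell
# Illustration only: rconfig names the RAC instances <prefix><instance_number>,
# one per configured node. With the prefix TEST1 this gives TEST11 and TEST12.
PREFIX=TEST1
INSTANCES=""
for i in 1 2; do
  INSTANCES="${INSTANCES}${INSTANCES:+,}${PREFIX}${i}"
done
echo "$INSTANCES"   # prints TEST11,TEST12
```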
The grid infra configuration has been updated with the additional instance.
oracle@node0:~/ [TEST11] srvctl config database -d TEST1 Database unique name: TEST1 Database name: TEST1 Oracle home: /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5 Oracle user: oracle Spfile: /u02/app/oracle/oradata/TEST1/dbs/spfileTEST1.ora Password file: /u02/app/oracle/oradata/TEST1//orapwTEST1 Domain: swisslos.local Start options: open Stop options: immediate Database role: PRIMARY Management policy: AUTOMATIC Server pools: Disk Groups: Mount point paths: /u01/app/odaorahome Services: Type: RAC Start concurrency: Stop concurrency: OSDBA group: dba OSOPER group: dbaoper Database instances: TEST11,TEST12 Configured nodes: node0,node1 CSS critical: no CPU count: 0 Memory target: 0 Maximum memory: 0 Default network number for database services: Database is administrator managed oracle@node0:~/ [TEST11]
As we can see, we will need to fine-tune the “Password file” and “Mount point paths” settings.
I also checked that the cluster_database parameter is now set to TRUE for both instances. I also confirmed that the datafiles, logfiles and tempfiles are still the ones expected, in their respective ACFS file systems.
oracle@node0:~/ [TEST11] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Wed Nov 27 10:50:34 2024 Version 19.23.0.0.0 Copyright (c) 1982, 2023, Oracle. All rights reserved. Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.23.0.0.0 SQL> select inst_id, name, value from gv$parameter where name='cluster_database'; INST_ID NAME VALUE ---------- -------------------- -------------------- 1 cluster_database TRUE 2 cluster_database TRUE SQL> select name from v$datafile; NAME ------------------------------------------------------------------------------------------------------------------------ /u02/app/oracle/oradata/TEST1/TEST1/datafile/o1_mf_system_mncsk8kf_.dbf /u02/app/oracle/oradata/TEST1/TEST1/datafile/o1_mf_undotbs2_mnft7oh9_.dbf /u02/app/oracle/oradata/TEST1/TEST1/datafile/o1_mf_sysaux_mncslcxc_.dbf /u02/app/oracle/oradata/TEST1/TEST1/datafile/o1_mf_undotbs1_mncslvbn_.dbf /u02/app/oracle/oradata/TEST1/TEST1/datafile/o1_mf_users_mnct7fwn_.dbf SQL> select member from v$logfile; MEMBER ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ /u04/app/oracle/redo/TEST1/onlinelog/o1_mf_3_mncsngg8_.log /u04/app/oracle/redo/TEST1/onlinelog/o1_mf_2_mncsngdv_.log /u04/app/oracle/redo/TEST1/onlinelog/o1_mf_1_mncsngbc_.log /u04/app/oracle/redo/TEST1/onlinelog/o1_mf_4_mnfsxpww_.log /u04/app/oracle/redo/TEST1/onlinelog/o1_mf_5_mnfsxsls_.log /u04/app/oracle/redo/TEST1/onlinelog/o1_mf_6_mnfsxw39_.log 6 rows selected. SQL> select name from v$tempfile; NAME -------------------------------------------------------------------------------- /u02/app/oracle/oradata/TEST1/TEST1/datafile/o1_mf_temp_mncso2l7_.tmp SQL>
And the files exist in the appropriate folders. All good!
oracle@node0:~/ [TEST11] ls -ltrh /u02/app/oracle/oradata/TEST1/TEST1/datafile/ total 2.6G -rw-r----- 1 oracle asmadmin 5.1M Nov 27 10:41 o1_mf_users_mnct7fwn_.dbf -rw-r----- 1 oracle asmadmin 101M Nov 27 10:41 o1_mf_undotbs2_mnft7oh9_.dbf -rw-r----- 1 oracle asmadmin 284M Nov 27 10:52 o1_mf_temp_mncso2l7_.tmp -rw-r----- 1 oracle asmadmin 1.1G Nov 27 10:52 o1_mf_system_mncsk8kf_.dbf -rw-r----- 1 oracle asmadmin 101M Nov 27 10:52 o1_mf_undotbs1_mncslvbn_.dbf -rw-r----- 1 oracle asmadmin 1011M Nov 27 10:52 o1_mf_sysaux_mncslcxc_.dbf oracle@node0:~/ [TEST11] ls -ltrh /u04/app/oracle/redo/TEST1/onlinelog/ total 25G -rw-r----- 1 oracle asmadmin 4.1G Nov 27 10:34 o1_mf_5_mnfsxsls_.log -rw-r----- 1 oracle asmadmin 4.1G Nov 27 10:41 o1_mf_2_mncsngdv_.log -rw-r----- 1 oracle asmadmin 4.1G Nov 27 10:41 o1_mf_3_mncsngg8_.log -rw-r----- 1 oracle asmadmin 4.1G Nov 27 10:41 o1_mf_4_mnfsxpww_.log -rw-r----- 1 oracle asmadmin 4.1G Nov 27 10:41 o1_mf_6_mnfsxw39_.log -rw-r----- 1 oracle asmadmin 4.1G Nov 27 10:52 o1_mf_1_mncsngbc_.log oracle@node0:~/ [TEST11] oracle@node0:~/ [TEST11] ls -ltrh /u04/app/oracle/redo/TEST1/controlfile/ total 11M -rw-r----- 1 oracle asmadmin 11M Nov 27 11:23 o1_mf_mncsnfst_.ctl
Check TEST2 database
I also checked and ensured that nothing was modified for the other TEST2 database running in the same Oracle dbhome.
oracle@node0:~/ [TEST11] TEST2 ****************************************************** INSTANCE_NAME : TEST2 DB_NAME : TEST2 DB_UNIQUE_NAME : TEST2 STATUS : OPEN READ WRITE LOG_MODE : ARCHIVELOG USERS/SESSIONS : Normal: 0/0, Oracle-maintained: 2/6 DATABASE_ROLE : PRIMARY FLASHBACK_ON : NO FORCE_LOGGING : YES VERSION : 19.23.0.0.0 NLS_LANG : AMERICAN_AMERICA.UTF8 CDB_ENABLED : NO ****************************************************** Statustime: 2024-11-27 11:18:49 oracle@node0:~/ [TEST2] ps -ef | grep -i [p]mon | grep -i test2 oracle 89478 1 0 Nov26 ? 00:00:04 ora_pmon_TEST2 oracle@node0:~/ [TEST2] oracle@node1:~/ [rdbms192300_a] ps -ef | grep -i [p]mon | grep -i test2 oracle@node1:~/ [rdbms192300_a] oracle@node0:~/ [TEST2] srvctl status database -d TEST2 Instance TEST2 is running on node node0 oracle@node0:~/ [TEST2] oracle@node0:~/ [TEST2] srvctl config database -d TEST2 Database unique name: TEST2 Database name: TEST2 Oracle home: /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5 Oracle user: oracle Spfile: /u02/app/oracle/oradata/TEST2/dbs/spfileTEST2.ora Password file: /u02/app/oracle/oradata/TEST2/dbs/orapwTEST2 Domain: swisslos.local Start options: open Stop options: immediate Database role: PRIMARY Management policy: AUTOMATIC Server pools: Disk Groups: DATA Mount point paths: /u01/app/odaorahome,/u02/app/oracle/oradata/TEST2,/u03/app/oracle/,/u01/app/odaorabase0,/u01/app/odaorabase1 Services: Type: SINGLE OSDBA group: dba OSOPER group: dbaoper Database instance: TEST2 Configured nodes: node0,node1 CSS critical: no CPU count: 0 Memory target: 0 Maximum memory: 0 Default network number for database services: Database is administrator managed oracle@node0:~/ [TEST2] oracle@node0:~/ [TEST2] ls -ltrh /u02/app/oracle/oradata/TEST2/TEST2/datafile/ total 2.5G -rw-r----- 1 oracle asmadmin 5.1M Nov 26 16:50 o1_mf_users_mncv7rhb_.dbf -rw-r----- 1 oracle asmadmin 252M Nov 26 22:06 o1_mf_temp_mnctonly_.tmp -rw-r----- 1 oracle asmadmin 96M Nov 27 11:20 
o1_mf_undotbs1_mnctmgrj_.dbf -rw-r----- 1 oracle asmadmin 1.1G Nov 27 11:20 o1_mf_system_mnctkw20_.dbf -rw-r----- 1 oracle asmadmin 1.1G Nov 27 11:20 o1_mf_sysaux_mnctlzhd_.dbf oracle@node0:~/ [TEST2] ls -ltrh /u04/app/oracle/redo/TEST2/onlinelog/ total 13G -rw-r----- 1 oracle asmadmin 4.1G Nov 26 16:44 o1_mf_2_mncto2qz_.log -rw-r----- 1 oracle asmadmin 4.1G Nov 26 16:44 o1_mf_3_mncto2s1_.log -rw-r----- 1 oracle asmadmin 4.1G Nov 27 11:21 o1_mf_1_mncto2nv_.log oracle@node0:~/ [TEST2] ls -ltrh /u04/app/oracle/redo/TEST2/controlfile/ total 11M -rw-r----- 1 oracle asmadmin 11M Nov 27 11:21 o1_mf_mncto22y_.ctl oracle@node0:~/ [TEST2] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Wed Nov 27 11:21:34 2024 Version 19.23.0.0.0 Copyright (c) 1982, 2023, Oracle. All rights reserved. Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.23.0.0.0 SQL> select inst_id, name, value from gv$parameter where name='cluster_database'; INST_ID NAME VALUE ---------- -------------------- -------------------- 1 cluster_database FALSE SQL> select name from v$datafile; NAME ------------------------------------------------------------------------------------------------------------------------ /u02/app/oracle/oradata/TEST2/TEST2/datafile/o1_mf_system_mnctkw20_.dbf /u02/app/oracle/oradata/TEST2/TEST2/datafile/o1_mf_sysaux_mnctlzhd_.dbf /u02/app/oracle/oradata/TEST2/TEST2/datafile/o1_mf_undotbs1_mnctmgrj_.dbf /u02/app/oracle/oradata/TEST2/TEST2/datafile/o1_mf_users_mncv7rhb_.dbf SQL> select name from v$tempfile; NAME ------------------------------------------------------------------------------------------------------------------------ /u02/app/oracle/oradata/TEST2/TEST2/datafile/o1_mf_temp_mnctonly_.tmp SQL> select member from v$logfile; MEMBER ------------------------------------------------------------------------------------------------------------------------ /u04/app/oracle/redo/TEST2/onlinelog/o1_mf_3_mncto2s1_.log 
/u04/app/oracle/redo/TEST2/onlinelog/o1_mf_2_mncto2qz_.log /u04/app/oracle/redo/TEST2/onlinelog/o1_mf_1_mncto2nv_.log
Change grid infra password file for converted TEST1 database
As we could see in the checks, the grid infra configuration references a password file created by the conversion, which is not stored in the appropriate subdirectory.
Indeed, the conversion has created a new password file.
oracle@node0:/u02/app/oracle/oradata/TEST1/ [TEST11] ls -ltrh total 224K drwx------ 2 root root 64K Nov 26 16:14 lost+found drwxr-x--- 3 oracle asmadmin 20K Nov 26 16:15 TEST1 drwxrwx--- 2 oracle oinstall 20K Nov 26 16:26 arc10 drwxr-x--- 2 oracle oinstall 20K Nov 26 16:27 dbs -rw-r----- 1 oracle oinstall 2.0K Nov 27 10:41 orapwTEST1
Let’s confirm the previous password file still exists.
oracle@node0:/u02/app/oracle/oradata/TEST1/ [TEST11] ls -ltrh /u02/app/oracle/oradata/TEST1/dbs/orapwTEST1 -rw-r----- 1 oracle asmadmin 2.0K Nov 26 16:17 /u02/app/oracle/oradata/TEST1/dbs/orapwTEST1
I then updated the value in the grid infra configuration.
oracle@node0:/u02/app/oracle/oradata/TEST1/ [TEST11] srvctl config database -d TEST1 | grep -i Password Password file: /u02/app/oracle/oradata/TEST1//orapwTEST1 oracle@node0:/u02/app/oracle/oradata/TEST1/ [TEST11] srvctl modify database -d TEST1 -pwfile /u02/app/oracle/oradata/TEST1/dbs/orapwTEST1 oracle@node0:/u02/app/oracle/oradata/TEST1/ [TEST11] srvctl config database -d TEST1 | grep -i Password Password file: /u02/app/oracle/oradata/TEST1/dbs/orapwTEST1
Change grid infra mount point paths for converted TEST1 database
All the ACFS mount paths used by the database should be added back to the grid infra configuration, so that the resource dependencies are complete.
Here is the list of the ACFS file systems the database will be using.
oracle@node1:~/ [rdbms192300_a] df -h /u01/app/odaorahome Filesystem Size Used Avail Use% Mounted on /dev/asm/orahome_sh-446 80G 30G 51G 37% /u01/app/odaorahome oracle@node1:~/ [rdbms192300_a] df -h /u02/app/oracle/oradata/TEST1 Filesystem Size Used Avail Use% Mounted on /dev/asm/dattest1-446 100G 3.0G 98G 3% /u02/app/oracle/oradata/TEST1 oracle@node1:~/ [rdbms192300_a] df -h /u03/app/oracle/ Filesystem Size Used Avail Use% Mounted on /dev/asm/reco-348 7.5T 18G 7.5T 1% /u03/app/oracle oracle@node1:~/ [rdbms192300_a] df -h /u04/app/oracle/redo Filesystem Size Used Avail Use% Mounted on /dev/asm/redo-195 240G 93G 148G 39% /u04/app/oracle/redo oracle@node1:~/ [rdbms192300_a] df -h /u01/app/odaorabase0 Filesystem Size Used Avail Use% Mounted on /dev/asm/odabase_n0-446 100G 11G 90G 11% /u01/app/odaorabase0 oracle@node1:~/ [rdbms192300_a] df -h /u01/app/odaorabase1 Filesystem Size Used Avail Use% Mounted on /dev/asm/odabase_n1-446 100G 3.4G 97G 4% /u01/app/odaorabase1
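The srvctl -acfspath option expects a single comma-separated list. As a small sketch (paths hardcoded from the df output above), the value can be assembled like this before passing it to srvctl modify database:

```shell
# Build the comma-separated -acfspath value from the mount points listed
# above (hardcoded here for illustration).
ACFS_PATHS="/u01/app/odaorahome /u02/app/oracle/oradata/TEST1 /u03/app/oracle /u04/app/oracle/redo /u01/app/odaorabase0 /u01/app/odaorabase1"
ACFSPATH=$(echo "$ACFS_PATHS" | tr ' ' ',')
echo "$ACFSPATH"
# srvctl modify database -d TEST1 -acfspath "$ACFSPATH"
```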
I then updated the grid infra mount point paths value.
oracle@node0:/u02/app/oracle/oradata/TEST1/ [TEST11] srvctl config database -d TEST1 | grep -i Mount Mount point paths: /u01/app/odaorahome oracle@node0:/u02/app/oracle/oradata/TEST1/ [TEST11] srvctl modify database -d TEST1 -acfspath "/u01/app/odaorahome,/u02/app/oracle/oradata/TEST1,/u03/app/oracle,/u04/app/oracle/redo,/u01/app/odaorabase0,/u01/app/odaorabase1" oracle@node0:/u02/app/oracle/oradata/TEST1/ [TEST11] srvctl config database -d TEST1 | grep -i Mount Mount point paths: /u01/app/odaorahome,/u02/app/oracle/oradata/TEST1,/u03/app/oracle/,/u04/app/oracle/redo/,/u01/app/odaorabase0,/u01/app/odaorabase1
Final check and database restart
I did some final grid infra configuration checks.
oracle@node0:/u02/app/oracle/oradata/TEST1/ [TEST11] srvctl config database -d TEST1 Database unique name: TEST1 Database name: TEST1 Oracle home: /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5 Oracle user: oracle Spfile: /u02/app/oracle/oradata/TEST1/dbs/spfileTEST1.ora Password file: /u02/app/oracle/oradata/TEST1/dbs/orapwTEST1 Domain: swisslos.local Start options: open Stop options: immediate Database role: PRIMARY Management policy: AUTOMATIC Server pools: Disk Groups: Mount point paths: /u01/app/odaorahome,/u02/app/oracle/oradata/TEST1,/u03/app/oracle/,/u04/app/oracle/redo/,/u01/app/odaorabase0,/u01/app/odaorabase1 Services: Type: RAC Start concurrency: Stop concurrency: OSDBA group: dba OSOPER group: dbaoper Database instances: TEST11,TEST12 Configured nodes: node0,node1 CSS critical: no CPU count: 0 Memory target: 0 Maximum memory: 0 Default network number for database services: Database is administrator managed
And I tested the database restart through the clusterware with srvctl.
oracle@node0:~/ [TEST11] srvctl status database -d TEST1 Instance TEST11 is running on node node0 Instance TEST12 is running on node node1 oracle@node0:~/ [TEST11] srvctl stop database -d TEST1 oracle@node0:~/ [TEST11] srvctl status database -d TEST1 Instance TEST11 is not running on node node0 Instance TEST12 is not running on node node1 oracle@node0:~/ [TEST11] ps -ef | grep -i [p]mon | grep -i test1 oracle@node0:~/ [TEST11] oracle@node1:~/ [TEST12] ps -ef | grep -i [p]mon | grep -i test oracle@node1:~/ [TEST12] oracle@node0:~/ [TEST11] TEST11 ************************* INSTANCE_NAME : TEST11 STATUS : DOWN ************************* Statustime: 2024-11-27 11:15:51 oracle@node1:~/ [TEST12] TEST12 ************************* INSTANCE_NAME : TEST12 STATUS : DOWN ************************* Statustime: 2024-11-27 11:16:01 oracle@node0:~/ [TEST11] srvctl start database -d TEST1 oracle@node0:~/ [TEST11] srvctl status database -d TEST1 Instance TEST11 is running on node node0 Instance TEST12 is running on node node1 oracle@node0:~/ [TEST11] ps -ef | grep -i [p]mon | grep -i test1 oracle 60785 1 0 11:16 ? 00:00:00 ora_pmon_TEST11 oracle@node0:~/ [TEST11] oracle@node1:~/ [TEST12] ps -ef | grep -i [p]mon | grep -i test oracle 83531 1 0 11:16 ? 
00:00:00 ora_pmon_TEST12 oracle@node1:~/ [TEST12] oracle@node0:~/ [TEST11] TEST11 ****************************************************** INSTANCE_NAME : TEST11 DB_NAME : TEST1 DB_UNIQUE_NAME : TEST1 STATUS : OPEN READ WRITE LOG_MODE : ARCHIVELOG USERS/SESSIONS : Normal: 0/0, Oracle-maintained: 2/6 DATABASE_ROLE : PRIMARY FLASHBACK_ON : NO FORCE_LOGGING : YES VERSION : 19.23.0.0.0 NLS_LANG : AMERICAN_AMERICA.UTF8 CDB_ENABLED : NO ****************************************************** Statustime: 2024-11-27 11:17:48 oracle@node1:~/ [TEST12] TEST12 ****************************************************** INSTANCE_NAME : TEST12 DB_NAME : TEST1 DB_UNIQUE_NAME : TEST1 STATUS : OPEN READ WRITE LOG_MODE : ARCHIVELOG USERS/SESSIONS : Normal: 0/0, Oracle-maintained: 2/6 DATABASE_ROLE : PRIMARY FLASHBACK_ON : NO FORCE_LOGGING : YES VERSION : 19.23.0.0.0 NLS_LANG : AMERICAN_AMERICA.UTF8 CDB_ENABLED : NO ****************************************************** Statustime: 2024-11-27 11:17:59
Update ODA metadata
I checked the ODA metadata and could see that the database is still registered as an SI database.
[root@node0 ~]# odacli list-dbhomes ID Name DB Version DB Edition Home Location Status ---------------------------------------- -------------------- -------------------- ---------- -------------------------------------------------------- ---------- 03e59f95-e77f-4429-a9fc-466bea89545b OraDB19000_home4 19.23.0.0.240416 EE /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4 CONFIGURED 1c6059a2-f6c7-4bca-a07a-8efc0757ed08 OraDB19000_home5 19.23.0.0.240416 EE /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5 CONFIGURED [root@node0 ~]# odacli list-databases ID DB Name DB Type DB Version CDB Class Edition Shape Storage Status DB Home ID ---------------------------------------- ---------- -------- -------------------- ------- -------- -------- -------- -------- ------------ ---------------------------------------- 2d824a9f-735a-4e8d-b6c8-5393ddc894e9 DBSI6 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 0393d997-50aa-4511-b5b9-c4ff2da393db DBGI3 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 712d542e-ded7-4d1a-9b9d-7c335042ffc0 DAWHT SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 28026894-0c2d-417b-b11a-d76516805247 DBSI1 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 11a12489-2483-4f8a-bb60-7145417181a1 DBSI4 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b cd183219-3daa-4154-b4a4-41b92d4f8155 DBBI1 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b fdfe3197-223f-4660-a834-4736f50110ef DBSI2 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 7391380b-f609-4457-be6b-bd9afa51148c DBBI2 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b ce4350ed-e291-4815-8c43-3c6716d6402f DBGI6 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS 
CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 274e6069-b174-43fb-8625-70e1e333f160 DBSI5 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 52eadf14-4d20-4910-91ca-a335361d53b2 RCDB SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b ec54945d-d0de-4b92-8822-2bd0d31fe653 DBSI3 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 95420f39-db33-4c4d-8d85-a5f8d42945e6 DBBI3 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b b7c42ea7-6eab-4b98-8ea7-8dd4ce9517a1 DBBI4 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 707251cc-f19a-4b8c-89cc-63477c5747d0 DBGI4 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 2bbd4391-5eed-4878-b2e5-3670587527f6 DBBI6 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b f540b5d1-c074-457a-85e2-d35240541efd DBGI2 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 16a24733-cfba-4e75-a9ce-59b3779dc82e DBGI1 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 4358287c-9cf0-45d4-a7e3-a59f933e86b2 TEST1 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 1c6059a2-f6c7-4bca-a07a-8efc0757ed08 110f26e7-f9f3-412e-9443-a201d24201a0 TEST2 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 1c6059a2-f6c7-4bca-a07a-8efc0757ed08 [root@node0 ~]#
I then updated the ODA metadata.
[root@node0 ~]# odacli update-registry -n db Job details ---------------------------------------------------------------- ID: 650c800a-823c-4e57-afcc-3a6530eb402c Description: Discover Components : db Status: Created Created: November 27, 2024 11:26:39 AM CET Message: Task Name Node Name Start Time End Time Status ---------------------------------------- ------------------------- ---------------------------------------- ---------------------------------------- ---------------- [root@node0 ~]# odacli describe-job -i 650c800a-823c-4e57-afcc-3a6530eb402c Job details ---------------------------------------------------------------- ID: 650c800a-823c-4e57-afcc-3a6530eb402c Description: Discover Components : db Status: Success Created: November 27, 2024 11:26:39 AM CET Message: Task Name Node Name Start Time End Time Status ---------------------------------------- ------------------------- ---------------------------------------- ---------------------------------------- ---------------- Discover DBHome node0 November 27, 2024 11:26:56 AM CET November 27, 2024 11:27:02 AM CET Success Discover DBHome node0 November 27, 2024 11:27:02 AM CET November 27, 2024 11:27:07 AM CET Success Discover DB: DAWHT node1 November 27, 2024 11:27:07 AM CET November 27, 2024 11:27:19 AM CET Success Discover DB: DBBI1 node0 November 27, 2024 11:27:19 AM CET November 27, 2024 11:27:32 AM CET Success Discover DB: DBBI2 node0 November 27, 2024 11:27:32 AM CET November 27, 2024 11:27:45 AM CET Success Discover DB: DBBI3 node0 November 27, 2024 11:27:45 AM CET November 27, 2024 11:27:58 AM CET Success Discover DB: DBBI4 node1 November 27, 2024 11:27:58 AM CET November 27, 2024 11:28:10 AM CET Success Discover DB: DBBI6 node0 November 27, 2024 11:28:10 AM CET November 27, 2024 11:28:26 AM CET Success Discover DB: DBGI1 node0 November 27, 2024 11:28:26 AM CET November 27, 2024 11:28:38 AM CET Success Discover DB: DBGI2 node0 November 27, 2024 11:28:38 AM CET November 27, 2024 11:28:50 AM CET Success 
Discover DB: DBGI3 node0 November 27, 2024 11:28:50 AM CET November 27, 2024 11:29:02 AM CET Success Discover DB: DBGI4 node1 November 27, 2024 11:29:02 AM CET November 27, 2024 11:29:14 AM CET Success Discover DB: DBGI6 node0 November 27, 2024 11:29:14 AM CET November 27, 2024 11:29:30 AM CET Success Discover DB: DBSI1 node0 November 27, 2024 11:29:30 AM CET November 27, 2024 11:29:42 AM CET Success Discover DB: DBSI2 node0 November 27, 2024 11:29:42 AM CET November 27, 2024 11:29:54 AM CET Success Discover DB: DBSI3 node0 November 27, 2024 11:29:54 AM CET November 27, 2024 11:30:06 AM CET Success Discover DB: DBSI4 node1 November 27, 2024 11:30:07 AM CET November 27, 2024 11:30:18 AM CET Success Discover DB: DBSI5 node1 November 27, 2024 11:30:19 AM CET November 27, 2024 11:30:30 AM CET Success Discover DB: DBSI6 node0 November 27, 2024 11:30:30 AM CET November 27, 2024 11:30:43 AM CET Success Discover DB: RCDB node0 November 27, 2024 11:30:43 AM CET November 27, 2024 11:31:01 AM CET Success Discover DB: TEST1 node0 November 27, 2024 11:31:01 AM CET November 27, 2024 11:31:15 AM CET Success Discover DB: TEST2 node0 November 27, 2024 11:31:15 AM CET November 27, 2024 11:31:29 AM CET Success [root@node0 ~]#
I could confirm that all is good now: the TEST1 database is seen as a RAC database.
[root@node0 ~]# odacli list-databases ID DB Name DB Type DB Version CDB Class Edition Shape Storage Status DB Home ID ---------------------------------------- ---------- -------- -------------------- ------- -------- -------- -------- -------- ------------ ---------------------------------------- 712d542e-ded7-4d1a-9b9d-7c335042ffc0 DAWHT SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b cd183219-3daa-4154-b4a4-41b92d4f8155 DBBI1 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 7391380b-f609-4457-be6b-bd9afa51148c DBBI2 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 95420f39-db33-4c4d-8d85-a5f8d42945e6 DBBI3 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b b7c42ea7-6eab-4b98-8ea7-8dd4ce9517a1 DBBI4 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 2bbd4391-5eed-4878-b2e5-3670587527f6 DBBI6 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 16a24733-cfba-4e75-a9ce-59b3779dc82e DBGI1 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b f540b5d1-c074-457a-85e2-d35240541efd DBGI2 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 0393d997-50aa-4511-b5b9-c4ff2da393db DBGI3 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 707251cc-f19a-4b8c-89cc-63477c5747d0 DBGI4 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b ce4350ed-e291-4815-8c43-3c6716d6402f DBGI6 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 28026894-0c2d-417b-b11a-d76516805247 DBSI1 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b fdfe3197-223f-4660-a834-4736f50110ef DBSI2 SI 19.23.0.0.240416 false OLTP EE odb1 
ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b ec54945d-d0de-4b92-8822-2bd0d31fe653 DBSI3 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 11a12489-2483-4f8a-bb60-7145417181a1 DBSI4 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 274e6069-b174-43fb-8625-70e1e333f160 DBSI5 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 2d824a9f-735a-4e8d-b6c8-5393ddc894e9 DBSI6 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 52eadf14-4d20-4910-91ca-a335361d53b2 RCDB SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 03e59f95-e77f-4429-a9fc-466bea89545b 4358287c-9cf0-45d4-a7e3-a59f933e86b2 TEST1 RAC 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 1c6059a2-f6c7-4bca-a07a-8efc0757ed08 110f26e7-f9f3-412e-9443-a201d24201a0 TEST2 SI 19.23.0.0.240416 false OLTP EE odb1 ACFS CONFIGURED 1c6059a2-f6c7-4bca-a07a-8efc0757ed08 [root@node0 ~]#
To wrap up
As we can see, it is quite easy to convert a Single Instance database to a RAC database on an ODA. There are a few rconfig XML template parameters that need some attention to ensure there will be no data loss. I would strongly recommend testing the procedure first on a test database created just for this purpose.
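For reference, the parameters I am referring to live in the sample file shipped with the database home ($ORACLE_HOME/assistants/rconfig/sampleXMLs/ConvertToRAC_AdminManaged.xml). The fragment below is a sketch based on the 19c sample with this article's values filled in; element names and defaults may differ in your version, so always start from your own sample file. Leaving TargetDatabaseArea empty is what keeps the datafiles in place on ACFS, and verify="ONLY" lets you validate the setup without converting anything:

```xml
<n:Convert verify="ONLY">  <!-- set to "YES" to actually run the conversion -->
  <n:SourceDBHome>/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5</n:SourceDBHome>
  <n:TargetDBHome>/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5</n:TargetDBHome>
  <n:SourceDBInfo SID="TEST1"/>
  <n:NodeList>
    <n:Node name="node0"/>
    <n:Node name="node1"/>
  </n:NodeList>
  <n:InstancePrefix>TEST1</n:InstancePrefix>
  <n:SharedStorage type="CFS">
    <!-- Left empty on purpose so rconfig keeps the files where they are (ACFS). -->
    <n:TargetDatabaseArea></n:TargetDatabaseArea>
    <n:TargetFlashRecoveryArea></n:TargetFlashRecoveryArea>
  </n:SharedStorage>
</n:Convert>
```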