Virtualisation is now possible on the ODA, whether it is an ODA Lite or an ODA HA model. The virtualisation is based on KVM and integrated into odacli on the Bare Metal installation. Virtualisation has the advantage of allocating resources to databases and applications on the same physical server. But what High Availability solution do I have for virtualisation on the ODA?
Virtualisation solutions
There are two kinds of virtualisation.
Virtual Machines
Virtual Machines are also called Compute Instances.
- These VMs are mainly used for applications
- They support various OSes like Linux, Windows, Solaris, …
To create a Compute Instance, we need the following resources (a command sketch follows this list):
- A CPU Pool
- A VM Storage
- A Virtual Network
- Virtual Disks
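Each of these is detailed below. To give an idea, the whole chain could be created with commands along these lines. This is only a minimal sketch: all names (vmcpupool, vmstore, vnet1, vdisk1, vm1) and values are made-up examples, and the exact flags may differ between ODA versions, so check the odacli online help first.

# Sketch only: placeholder names and values, verify each flag with 'odacli <command> -h'
odacli create-cpupool -n vmcpupool -c 4 -vm        # CPU pool reserved for VMs
odacli create-vmstorage -n vmstore -s 200G         # ACFS-based VM storage
odacli create-vnetwork -n vnet1 -if btbond1 -t bridged -ip 192.168.10.10 -nm 255.255.255.0 -gw 192.168.10.1
odacli create-vdisk -n vdisk1 -vms vmstore -s 20G  # virtual disk inside the VM storage
odacli create-vm -n vm1 -src /tmp/OL8.iso -vc 2 -m 8G -vms vmstore -vn vnet1 -cp vmcpupool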
VM Storage:
- A central location storing the resources essential for creating and managing VMs, such as ISO files, VM configuration files and Virtual Disks
- Created on ACFS, so a VM can easily be relocated if the physical server where it is running fails
Virtual Network:
- Bridged or bridged-vlan
- Attached to one of the bonding interfaces
- Additional bridged networks can be created on any interface except the public interface
- Additional bridged-vlan networks can be created on any interface, including the public interface
- Bridge with or without an IP address assigned
- For more information, see my other blog post on this subject: https://blog.dbi-services.com/creating-kvm-database-system-on-separate-vlan-network-on-oda/
A Virtual Disk:
- Is created in a VM Storage
- Provides additional storage to the VM
- Can be attached to a VM and detached later on
- Can be used to expand a VM file system or to create a new one (see the sketch after this list)
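For example, providing a VM with additional storage and growing it later could look like this. This is a sketch with placeholder names; I assume the attach/detach flags from the odacli reference, so verify them with the help output first:

odacli modify-vm -h | grep -i vdisk      # check the exact attach/detach flag names
odacli modify-vm -n vm1 -avd vdisk1      # attach the virtual disk (assumed flag --attach-vdisk,-avd)
odacli modify-vdisk -n vdisk1 -s 40G     # grow the virtual disk
odacli modify-vm -n vm1 -dvd vdisk1      # detach it again (assumed flag --detach-vdisk,-dvd)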
A Compute Instance can be started, stopped, cloned and restarted independently.
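For example, a basic lifecycle sequence could look like this (a sketch; vm1 and vm1clone are placeholder names, and the -cn flag for clone-vm should be double-checked with odacli clone-vm -h):

odacli stop-vm -n vm1                    # cloning requires the VM to be stopped
odacli clone-vm -n vm1 -cn vm1clone      # clone the VM inside the same VM storage
odacli start-vm -n vm1                   # start the original VM again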
Autostart can be enabled or disabled on any ODA model (single-node or HA), see below:
[root@dbioda02 ~]# odacli describe-vm -n vmdoag | grep -i 'Auto Start'
                  Auto start:  YES

[root@dbioda02 ~]# odacli modify-vm -h | grep -i 'autostart'
   --autostart,-as         Enables autostart for the VM
   --no-autostart,-no-as   Disables autostart for the VM

[root@dbioda02 ~]# odacli modify-vm -n vmdoag -no-as

Job details
----------------------------------------------------------------
                     ID:  8cce5d5a-3f84-4140-a3eb-13646db774bc
            Description:  VM vmdoag modification
                 Status:  Created
                Created:  December 20, 2022 4:07:37 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@dbioda02 ~]# odacli describe-job -i 8cce5d5a-3f84-4140-a3eb-13646db774bc

Job details
----------------------------------------------------------------
                     ID:  8cce5d5a-3f84-4140-a3eb-13646db774bc
            Description:  VM vmdoag modification
                 Status:  Success
                Created:  December 20, 2022 4:07:37 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate dependency resources            December 20, 2022 4:07:37 PM CET    December 20, 2022 4:07:38 PM CET    Success
Define VM locally                        December 20, 2022 4:07:38 PM CET    December 20, 2022 4:07:38 PM CET    Success
Edit VM CRS Configuration                December 20, 2022 4:07:38 PM CET    December 20, 2022 4:07:38 PM CET    Success
Save domain in ACFS                      December 20, 2022 4:07:38 PM CET    December 20, 2022 4:07:38 PM CET    Success
Define VM globally                       December 20, 2022 4:07:38 PM CET    December 20, 2022 4:07:38 PM CET    Success
Modify VM metadata                       December 20, 2022 4:07:38 PM CET    December 20, 2022 4:07:38 PM CET    Success
Persist metadata                         December 20, 2022 4:07:38 PM CET    December 20, 2022 4:07:39 PM CET    Success

[root@dbioda02 ~]# odacli describe-vm -n vmdoag | grep -i 'Auto Start'
                  Auto start:  NO
DB Systems
- These VMs are dedicated to hosting databases
- Hard partitioning is possible
- Each DB System comes with its own odacli, like a small ODA integrated in the Bare Metal itself
- It permits network segregation and OS segregation
- They are mainly used for licensing reasons with Enterprise Edition: the CPU cores of the ODA do not need to be reduced. Instead, a shared DB System CPU pool is created according to the number of licensed cores, and the DB Systems run on that CPU pool.
For DB Systems there are a few requirements that need to be understood:
- 200 GB per DB System is needed in the DATA disk group to create the virtual host
- We can only host one database per DB System (note that this limitation has been lifted in more recent versions, see the comments below)
- Databases created in a DB System can only run on ASM, not on ACFS
- In version 19.x it is possible to create CDB or non-CDB databases
- With the current ODA 19.17 version, it is possible to create databases in a DB System in versions 21.8, 21.7, 21.6, 21.5, 21.4, 19.17, 19.16, 19.15, 19.14, and 19.13
On the other hand, a DB System requires a more complex installation, uses more resources (memory and CPU) and adds more maintenance work (during ODA patching we will additionally need to patch all DB Systems).
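For reference, a DB System is created on the Bare Metal from a JSON payload. A minimal session could look like the sketch below (dbs1 and the JSON path are placeholders; the JSON content itself is version dependent, so start from the template given in the ODA documentation):

odacli describe-dbsystem-image           # list the DB System images available on the ODA
odacli create-dbsystem -p /tmp/dbs1.json # create the DB System from its JSON definition
odacli list-dbsystems                    # follow the created DB Systems
odacli describe-dbsystem -n dbs1         # show the details of one DB System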
High Availability with Compute Instance
HA (High Availability) is only possible with ODA HA models (two nodes with shared ACFS storage).
By default, VMs are created with autostart and failover enabled on HA models:
[root@dbioda02 ~]# odacli describe-vm -n vmdoag | grep -i 'Auto Start\|Fail over'
                  Auto start:  YES
                   Fail over:  YES
It is of course possible to enable or disable the failover option, as sketched below.
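The flags can be checked the same way as for autostart. A sketch, assuming a --failover/--no-failover pair analogous to the autostart flags (verify with the help output first):

odacli modify-vm -h | grep -i failover   # check the exact failover flag names
odacli modify-vm -n vmdoag -no-fo        # disable failover (assumed flag)
odacli modify-vm -n vmdoag -fo           # re-enable failover (assumed flag)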
What happens if the physical node where the VM is running crashes?
On HA models, there will be one attempt to restart the VM before failing it over to the other node.
Relocate VM
We can easily relocate the VM to the other node. Currently my VM is running on node 0, named dbioda02.
[root@dbioda02 ~]# odacli describe-vm -n vmdoag | grep -i node
                Current node:  dbioda02
              Preferred node:  dbioda02
As we can see, we can set a preferred node. This preferred node can easily be changed.
[root@dbioda02 ~]# odacli modify-vm -h | grep -i pref-node
   --pref-node,-pn          Defines the preferred node to run the VM, use
To be able to relocate the VM to the other node, the VM graphical (VNC) listen address should be set to 127.0.0.1 if not already done, otherwise the command will fail with the following error:
internal error: qemu unexpectedly closed the monitor: 2022-09-12T21:05:45.531641Z qemu-system-x86_64: -vnc 10.36.0.232:0: Failed to bind socket: Cannot assign requested address
[root@dbioda02 ~]# odacli modify-vm -n vmdoag -g "vnc,listen=127.0.0.1"

Job details
----------------------------------------------------------------
                     ID:  74028fc9-d680-4565-ba94-cce0d0bb5677
            Description:  VM vmdoag modification
                 Status:  Created
                Created:  December 21, 2022 8:35:50 AM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
To relocate the VM to the other node, we can either stop the VM and start it on the other node:
odacli start-vm -n vmdoag -nn dbioda03
or use the migrate command:
odacli migrate-vm -n vmdoag -to dbioda03
Currently my VM is running on node 0, dbioda02, node 1 being dbioda03. Pay attention to which node the KVM processes are running on.
[root@dbioda02 ~]# ps -ef | grep -i [k]vm
qemu     36043     1  0 Dec20 ?        00:08:42 /usr/bin/qemu-system-x86_64 -name guest=vmdoag,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-2-vmdoag/master-key.aes -enable-fips -machine pc-i440fx-4.2,accel=kvm,usb=off,dump-guest-core=off -cpu Haswell-noTSX-IBRS -m 6144 -overcommit mem-lock=off -smp 2,sockets=2,cores=1,threads=1 -uuid d71ecbc9-30f6-43c6-b6ea-7649fffa60b6 -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=36,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x4.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x4 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x4.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x4.0x2 -blockdev {"driver":"file","filename":"/u05/app/sharedrepo/vmsdoag/.ACFS/snaps/vm_vmdoag/vmdoag","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-2-format","read-only":false,"driver":"qcow2","file":"libvirt-2-storage","backing":null} -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=libvirt-2-format,id=virtio-disk0,bootindex=1 -device ide-cd,bus=ide.0,unit=0,id=ide0-0-0 -netdev tap,fd=38,id=hostnet0,vhost=on,vhostfd=39 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:3c:2a:60,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 10.36.0.232:0 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
root     36047     2  0 Dec20 ?        00:00:00 [kvm-nx-lpage-re]
root     36053     2  0 Dec20 ?        00:00:00 [kvm-pit/36043]

[root@dbioda03 ~]# ps -ef | grep -i [k]vm
[root@dbioda03 ~]#
Let’s relocate the VM to the dbioda03 node.
[root@dbioda02 ~]# odacli migrate-vm -n vmdoag -to dbioda03

Job details
----------------------------------------------------------------
                     ID:  ab5c7fd2-f21b-4f0d-af70-34747e333e75
            Description:  VM vmdoag migration
                 Status:  Created
                Created:  December 21, 2022 8:49:08 AM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@dbioda02 ~]# odacli describe-job -i ab5c7fd2-f21b-4f0d-af70-34747e333e75

Job details
----------------------------------------------------------------
                     ID:  ab5c7fd2-f21b-4f0d-af70-34747e333e75
            Description:  VM vmdoag migration
                 Status:  Success
                Created:  December 21, 2022 8:49:08 AM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate migrate prerequisites           December 21, 2022 8:49:08 AM CET    December 21, 2022 8:49:08 AM CET    Success
Migrate VM                               December 21, 2022 8:49:08 AM CET    December 21, 2022 8:49:26 AM CET    Success
Let’s check the details for the VM to confirm the current node information.
[root@dbioda02 ~]# odacli describe-vm -n vmdoag | grep -i node
                Display Port:  Check node dbioda03
                Current node:  dbioda03
              Preferred node:  dbioda02
The current node is now dbioda03.
Let’s check that the KVM processes are really running on the dbioda03 node.
[root@dbioda02 ~]# ps -ef | grep -i [k]vm
[root@dbioda02 ~]#

[root@dbioda03 ~]# ps -ef | grep -i [k]vm
qemu     76320     1  4 08:49 ?        00:00:54 /usr/bin/qemu-system-x86_64 -name guest=vmdoag,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-vmdoag/master-key.aes -enable-fips -machine pc-i440fx-4.2,accel=kvm,usb=off,dump-guest-core=off -cpu Haswell-noTSX-IBRS -m 6144 -overcommit mem-lock=off -smp 2,sockets=2,cores=1,threads=1 -uuid d71ecbc9-30f6-43c6-b6ea-7649fffa60b6 -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=36,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x4.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x4 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x4.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x4.0x2 -blockdev {"driver":"file","filename":"/u05/app/sharedrepo/vmsdoag/.ACFS/snaps/vm_vmdoag/vmdoag","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-2-format","read-only":false,"driver":"qcow2","file":"libvirt-2-storage","backing":null} -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=libvirt-2-format,id=virtio-disk0,bootindex=1 -device ide-cd,bus=ide.0,unit=0,id=ide0-0-0 -netdev tap,fd=38,id=hostnet0,vhost=on,vhostfd=39 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:3c:2a:60,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 127.0.0.1:0 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
root     76336     2  0 08:49 ?        00:00:00 [kvm-nx-lpage-re]
root     76342     2  0 08:49 ?        00:00:00 [kvm-pit/76320]
Let’s crash the dbioda03 node and see if a failover of the VM is automatically done to the dbioda02 node.
[root@dbioda03 ~]# systemctl poweroff
Connection to 10.36.0.233 closed by remote host.
Connection to 10.36.0.233 closed.
We can see that the KVM processes have been automatically started on the dbioda02 node.
[root@dbioda02 ~]# ps -ef | grep -i [k]vm
qemu     19412     1 18 09:12 ?        00:00:46 /usr/bin/qemu-system-x86_64 -name guest=vmdoag,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-3-vmdoag/master-key.aes -enable-fips -machine pc-i440fx-4.2,accel=kvm,usb=off,dump-guest-core=off -cpu Haswell-noTSX-IBRS -m 6144 -overcommit mem-lock=off -smp 2,sockets=2,cores=1,threads=1 -uuid d71ecbc9-30f6-43c6-b6ea-7649fffa60b6 -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=36,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x4.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x4 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x4.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x4.0x2 -blockdev {"driver":"file","filename":"/u05/app/sharedrepo/vmsdoag/.ACFS/snaps/vm_vmdoag/vmdoag","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-2-format","read-only":false,"driver":"qcow2","file":"libvirt-2-storage","backing":null} -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=libvirt-2-format,id=virtio-disk0,bootindex=1 -device ide-cd,bus=ide.0,unit=0,id=ide0-0-0 -netdev tap,fd=38,id=hostnet0,vhost=on,vhostfd=39 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:3c:2a:60,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 127.0.0.1:0 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
root     19427     2  0 09:12 ?        00:00:00 [kvm-nx-lpage-re]
root     19433     2  0 09:12 ?        00:00:00 [kvm-pit/19412]
Following the automatic failover, and because the dbioda03 node is still powered off, the VM description shows the current node as N/A.
[root@dbioda02 ~]# odacli describe-vm -n vmdoag | grep -i node
                Current node:  N/A
              Preferred node:  dbioda02
Once the dbioda03 node is up and running again, the output correctly shows dbioda02 as the current node.
[root@dbioda02 ~]# odacli describe-vm -n vmdoag | grep -i node
                Current node:  dbioda02
              Preferred node:  dbioda02
High Availability with DB Systems
With DB Systems we can create either a single-instance database or a RAC database with two instances. Here we benefit from the RAC advantages for DB System High Availability. A quick check is sketched below.
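From the Bare Metal, the DB System layout can be checked with odacli, and inside the DB System the usual Oracle clusterware tooling applies. A small sketch (dbs1 and RACDB are placeholder names):

odacli describe-dbsystem -n dbs1         # on the Bare Metal: shows the DB System nodes
# inside the DB System, the standard tooling shows the two RAC instances:
srvctl status database -d RACDB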
Uwe
08.04.2025
Thanks Marc,
just one question regarding virtualised databases on the ODA.
Is there still one ASM database, or can you also have a separate ASM per VM?
I only read about one ASM per server …
Thanks in advance,
Uwe
Marc Wagner
09.04.2025
Hi Uwe,
You are correct. The Grid Infrastructure software will be installed in each DB System, but you will still have only one ASM for the whole ODA. Each DB System will be linked to it.
What changed with DB Systems is that in the past you could only have one Oracle database per DB System. Now you can have multiple Oracle databases per DB System: you can run odacli create-database inside the DB System to create additional Oracle databases.
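A minimal sketch (DB2 is a placeholder name, and the exact flags depend on the odacli version running inside the DB System):

odacli create-database -n DB2 -v 19.17.0.0.0   # run inside the DB System to add another database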
I hope this answers your question.
Regards,
Marc