Introduction

Oracle Database Appliance now includes a powerful virtualisation layer based on KVM and fully integrated with odacli. Sure, an ODA is mostly dedicated to Oracle databases, but you can also take advantage of unused cores to run application VMs in the same box as your databases. And it works like a charm.

One of my customers recently asked if it would be possible to restore part of a VM's data without interrupting the service, the goal being to browse the VM data as it was at a point in time and compare it to what's in the current filesystem. This is not about a full VM restore, but about getting back files from an older version.

Creating a test environment

On my ODA X8-2M running version 19.19, let's create a CPU pool for the VMs, create the VM storage on the DATA disk group and create a data vdisk. This vdisk is the one I'll use for data recovery:

odacli list-vms
No data found for VM

odacli create-cpupool -n CpuPool4VMs -c 4 -vm
sleep 30; odacli list-cpupools
Name                  Type                Configured on              Cores  Associated resources            Created                   Updated      
--------------------  ------------------  -------------------------  -----  ------------------------------  ------------------------  ------------------------
CpuPool4VMs           VM                  dbioda01                   4      NONE                            2023-10-05 11:24:07 CEST  2023-10-05 11:24:07 CEST

odacli create-vmstorage -n VMsDATA -dg DATA -s 500G
sleep 30; odacli list-vmstorages
Name                  Disk group       Volume name      Volume device                   Size        Used        Used %      Available   Mount Point                          Created                   Updated
--------------------  ---------------  ---------------  ------------------------------  ----------  ----------  ----------  ----------  -----------------------------------  ------------------------  ------------------------
VMsDATA               DATA             VMSDATA          /dev/asm/vmsdata-214            500.00 GB   1.36 GB     0.27%       498.64 GB   /u05/app/sharedrepo/vmsdata          2023-10-05 11:27:58 CEST  2023-10-05 11:27:58 CEST

                                                                                                                                         
odacli create-vdisk -n dbivm1-data -vms VMsDATA -s 100G

sleep 120 ;  odacli describe-job -i 1f438468-0835-4c8f-8a0d-beb99416524f

Job details
----------------------------------------------------------------
                     ID:  1f438468-0835-4c8f-8a0d-beb99416524f
            Description:  VDisk dbivm1-data creation
                 Status:  Success
                Created:  October 5, 2023 11:31:14 AM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate Virtual Disk doesn't exist      October 5, 2023 11:31:14 AM CEST    October 5, 2023 11:31:14 AM CEST    Success
Validate Vm Storage exists               October 5, 2023 11:31:14 AM CEST    October 5, 2023 11:31:14 AM CEST    Success
Validate Vm Storage space                October 5, 2023 11:31:14 AM CEST    October 5, 2023 11:31:14 AM CEST    Success
Create Virtual Disk snapshot             October 5, 2023 11:31:14 AM CEST    October 5, 2023 11:31:14 AM CEST    Success
Create Virtual Disk directories          October 5, 2023 11:31:14 AM CEST    October 5, 2023 11:31:14 AM CEST    Success
Create Virtual Disk                      October 5, 2023 11:31:14 AM CEST    October 5, 2023 11:33:22 AM CEST    Success
Create metadata                          October 5, 2023 11:33:22 AM CEST    October 5, 2023 11:33:22 AM CEST    Success
Persist metadata                         October 5, 2023 11:33:22 AM CEST    October 5, 2023 11:33:22 AM CEST    Success


odacli list-vdisks
Name                  VM storage            Size        Shared      Sparse      Created                   Updated
--------------------  --------------------  ----------  ----------  ----------  ------------------------  ------------------------
dbivm1-data           VMsDATA               100.00 GB   NO          NO          2023-10-05 11:33:22 CEST  2023-10-05 11:33:22 CEST

Now it's time to create the dbivm1 virtual machine. It will have 32GB of RAM and will boot from an Oracle Linux 7.9 ISO for the initial install. I will also attach the data vdisk I created before, and the VM will use 2 vCPUs from the CPU pool. It will be connected to the pubnet network, have a 50GB boot disk, and its VNC console will listen on my ODA's IP for the graphical setup.

odacli create-vm -n dbivm1 -m 32G -src /opt/dbi/V1009690-01.iso -vd dbivm1-data -vc 2 -cp CpuPool4VMs -vn pubnet -vms VMsDATA -s 50G -g "vnc,listen=10.86.20.241"

odacli describe-job -i bcc6662a-7a0b-4568-b853-9da86384ca13

Job details
----------------------------------------------------------------
                     ID:  bcc6662a-7a0b-4568-b853-9da86384ca13
            Description:  VM dbivm1 creation
                 Status:  Success
                Created:  October 5, 2023 1:41:22 PM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate dependency resources            October 5, 2023 1:41:22 PM CEST     October 5, 2023 1:41:22 PM CEST     Success
Validate resource allocations            October 5, 2023 1:41:22 PM CEST     October 5, 2023 1:41:23 PM CEST     Success
Allocate resources                       October 5, 2023 1:41:23 PM CEST     October 5, 2023 1:41:23 PM CEST     Success
Provision new VM                         October 5, 2023 1:41:23 PM CEST     October 5, 2023 1:41:25 PM CEST     Success
Add VM to Clusterware                    October 5, 2023 1:41:25 PM CEST     October 5, 2023 1:41:26 PM CEST     Success
Save domain in ACFS                      October 5, 2023 1:41:26 PM CEST     October 5, 2023 1:41:26 PM CEST     Success
Create VM metadata                       October 5, 2023 1:41:26 PM CEST     October 5, 2023 1:41:26 PM CEST     Success
Persist metadata                         October 5, 2023 1:41:26 PM CEST     October 5, 2023 1:41:26 PM CEST     Success


odacli list-vms
Name                  VM Storage            Current State    Target State     Created                   Updated
--------------------  --------------------  ---------------  ---------------  ------------------------  ------------------------
dbivm1                VMsDATA               ONLINE           ONLINE           2023-10-05 13:41:26 CEST  2023-10-05 13:41:26 CEST


odacli describe-vm -n dbivm1
VM details
--------------------------------------------------------------------------------
                       ID:  07a741e0-d356-4b5c-adb5-d5f3c8ae5d03
                     Name:  dbivm1
                  Created:  2023-10-05 13:41:26 CEST
                  Updated:  2023-10-05 13:41:26 CEST
               VM Storage:  VMsDATA
              Description:  NONE
            VM image path:  /u05/app/sharedrepo/vmsdata/.ACFS/snaps/vm_dbivm1/dbivm1
                  VM size:  50.00 GB
                   Source:  V1009690-01.iso
              Cloned from:  N/A
                  OS Type:  NONE
               OS Variant:  NONE
        Graphics settings:  vnc,listen=10.86.20.241
             Display Port:  10.86.20.241:0

 Status
--------------------------
             Current node:  dbioda01
            Current state:  ONLINE
             Target state:  ONLINE

 Parameters
--------------------------
           Preferred node:  NONE
              Boot option:  NONE
               Auto start:  YES
                Fail over:  NO
             NUMA enabled:  NO

                            Config                     Live
                            -------------------------  -------------------------
                   Memory:  32.00 GB                   32.00 GB
               Max Memory:  32.00 GB                   32.00 GB
               vCPU count:  2                          2
           Max vCPU count:  2                          2
                 CPU Pool:  CpuPool4VMs                CpuPool4VMs
        Effective CPU set:  1-2,5-6,9-10,13-14         1-2,5-6,9-10,13-14
                    vCPUs:  0:1-2,5-6,9-10,13-14       0:1-2,5-6,9-10,13-14
                            1:1-2,5-6,9-10,13-14       1:1-2,5-6,9-10,13-14
                   vDisks:  dbivm1-data:vdb            dbivm1-data:vdb
                vNetworks:  pubnet:52:54:00:1f:f4:84   pubnet:52:54:00:1f:f4:84

The display port is 10.86.20.241:0, so I can connect to this new VM with a VNC viewer on 10.86.20.241:5900: the default VNC port is 5900, and you just add the display number shown here (5900+0). I then go through the operating system setup, keeping the default values apart from the network settings (10.86.20.248/24). I only use the 50GB boot disk for the Linux installation; I will configure the data disk later.
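
If you want to make sure the console is actually reachable before launching the viewer, a quick check on the ODA host is enough. This is only a suggestion of mine: the ss check and the vncviewer call below are my own additions, and any VNC client will do.

ss -ltn | grep 5900
# from a workstation, connect with any VNC client, for example with TigerVNC:
vncviewer 10.86.20.241:0      # display 0 maps to port 5900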

Once the OS is deployed, let’s configure the 100GB data disk:

ssh [email protected]
fdisk -l /dev/vdb | grep GB
Disk /dev/vdb: 107.4 GB, 107374182400 bytes, 209715200 sectors

pvcreate /dev/vdb
vgcreate vg_data /dev/vdb
lvcreate -L 80G -n lv_data vg_data
mkfs.ext4 /dev/mapper/vg_data-lv_data
mkdir /data01
echo "/dev/mapper/vg_data-lv_data /data01 ext4 defaults 1 2" >> /etc/fstab
mount -a

df -h
Filesystem                   Size  Used Avail Use% Mounted on
devtmpfs                      16G     0   16G   0% /dev
tmpfs                         16G     0   16G   0% /dev/shm
tmpfs                         16G  8.6M   16G   1% /run
tmpfs                         16G     0   16G   0% /sys/fs/cgroup
/dev/mapper/ol-root           44G  1.4G   43G   4% /
/dev/vda1                   1014M  184M  831M  19% /boot
tmpfs                        3.2G     0  3.2G   0% /run/user/0
/dev/mapper/vg_data-lv_data   79G   57M   75G   1% /data01

My data disk is ready; let's put some dummy files inside:

cd /data01 ; rm -f dummyfile_* ; for a in {0001..5000}; do head -c $RANDOM /dev/urandom > /data01/dummyfile_$a.dmf; done

df -h
Filesystem                   Size  Used Avail Use% Mounted on
devtmpfs                      16G     0   16G   0% /dev
tmpfs                         16G     0   16G   0% /dev/shm
tmpfs                         16G  8.6M   16G   1% /run
tmpfs                         16G     0   16G   0% /sys/fs/cgroup
/dev/mapper/ol-root           44G  1.4G   43G   4% /
/dev/vda1                   1014M  184M  831M  19% /boot
tmpfs                        3.2G     0  3.2G   0% /run/user/0
/dev/mapper/vg_data-lv_data   79G  144M   75G   1% /data01

ls -lrt | head
total 89860
drwx------. 2 root root 16384 Oct  5 09:30 lost+found
-rw-r--r--. 1 root root 15781 Oct  5 10:03 dummyfile_0001.dmf
-rw-r--r--. 1 root root 14976 Oct  5 10:03 dummyfile_0002.dmf
-rw-r--r--. 1 root root 22209 Oct  5 10:03 dummyfile_0003.dmf
-rw-r--r--. 1 root root 20250 Oct  5 10:03 dummyfile_0005.dmf
-rw-r--r--. 1 root root 21634 Oct  5 10:03 dummyfile_0004.dmf
-rw-r--r--. 1 root root  2424 Oct  5 10:03 dummyfile_0006.dmf
-rw-r--r--. 1 root root 23033 Oct  5 10:03 dummyfile_0007.dmf
-rw-r--r--. 1 root root 11018 Oct  5 10:03 dummyfile_0008.dmf

Creating vdisk backups

Vdisks can be cloned, and a clone is a kind of backup that you could also use for another VM, for example. You will need to stop the source VM to clone its vdisks:

odacli list-vdisks
Name                  VM storage            Size        Shared      Sparse      Created                   Updated
--------------------  --------------------  ----------  ----------  ----------  ------------------------  ------------------------
dbivm1-data           VMsDATA               100.00 GB   NO          NO          2023-10-05 11:33:22 CEST  2023-10-05 11:33:22 CEST

odacli stop-vm -n dbivm1; sleep 30 ; odacli clone-vdisk -n dbivm1-data -cn dbivm1-data-`date +"%Y%m%d-%H%M"` ; sleep 30 ; odacli start-vm -n dbivm1

Cloning a vdisk is quite fast thanks to NVMe SSDs:

odacli describe-job -i f05139b2-92cc-4b65-a7df-987807bbb36f

Job details
----------------------------------------------------------------
                     ID:  f05139b2-92cc-4b65-a7df-987807bbb36f
            Description:  VDisk dbivm1-data cloning
                 Status:  Success
                Created:  October 5, 2023 4:11:56 PM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate Virtual Disk exists             October 5, 2023 4:11:56 PM CEST     October 5, 2023 4:11:56 PM CEST     Success
Validate Virtual Disk is not attached    October 5, 2023 4:11:56 PM CEST     October 5, 2023 4:11:56 PM CEST     Success
to any running vms
Clone Virtual Disk                       October 5, 2023 4:11:56 PM CEST     October 5, 2023 4:11:56 PM CEST     Success
Persist metadata                         October 5, 2023 4:11:56 PM CEST     October 5, 2023 4:11:56 PM CEST     Success

I now have another vdisk not attached to any VM:

odacli list-vdisks
Name                  VM storage            Size        Shared      Sparse      Created                   Updated
--------------------  --------------------  ----------  ----------  ----------  ------------------------  ------------------------
dbivm1-data           VMsDATA               100.00 GB   NO          NO          2023-10-05 11:33:22 CEST  2023-10-05 11:33:22 CEST
dbivm1-data           VMsDATA               100.00 GB   NO          NO          2023-10-05 16:11:56 CEST  2023-10-05 16:11:56 CEST
-20231005-1611

I can create as many clones as I want, for example one each day while keeping a week's worth (see the scheduling sketch after the listing below):

odacli stop-vm -n dbivm1; sleep 30 ; odacli clone-vdisk -n dbivm1-data -cn dbivm1-data-`date +"%Y%m%d-%H%M"` ; sleep 30 ; odacli start-vm -n dbivm1

odacli list-vdisks
Name                  VM storage            Size        Shared      Sparse      Created                   Updated
--------------------  --------------------  ----------  ----------  ----------  ------------------------  ------------------------
dbivm1-data           VMsDATA               100.00 GB   NO          NO          2023-10-05 11:33:22 CEST  2023-10-05 11:33:22 CEST
dbivm1-data           VMsDATA               100.00 GB   NO          NO          2023-10-05 16:11:56 CEST  2023-10-05 16:11:56 CEST
-20231005-1611
dbivm1-data           VMsDATA               100.00 GB   NO          NO          2023-10-05 16:26:24 CEST  2023-10-05 16:26:24 CEST
-20231005-1626
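
If you want to schedule this, here is a minimal sketch of a daily clone with a one-week retention. The script path, the date-only naming convention and the retention logic are my own assumptions; the odacli commands are exactly the ones used above.

#!/bin/bash
# hypothetical daily clone script, e.g. /opt/dbi/daily_clone_dbivm1.sh run from cron
VM=dbivm1
VDISK=dbivm1-data
KEEP=7                                      # days of clones to keep

TODAY=$(date +"%Y%m%d")
OLDEST=$(date -d "-${KEEP} days" +"%Y%m%d")

# stop the VM, take today's clone, restart the VM
odacli stop-vm -n "$VM"
sleep 30
odacli clone-vdisk -n "$VDISK" -cn "${VDISK}-${TODAY}"
sleep 30
odacli start-vm -n "$VM"

# drop the clone taken KEEP days ago, if it exists
odacli delete-vdisk -n "${VDISK}-${OLDEST}"

In a real setup I would also check each odacli job status with describe-job instead of relying on sleep before restarting the VM.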

These vdisks are visible on the ODA host, under a hidden ACFS folder. Note that the clones are not sparse (see the list-vdisks output above), so each one consumes the full size of the vdisk:

du -hs /u05/app/sharedrepo/vmsdata/.ACFS/snaps/*
101G    /u05/app/sharedrepo/vmsdata/.ACFS/snaps/vdisk_dbivm1-data
101G    /u05/app/sharedrepo/vmsdata/.ACFS/snaps/vdisk_dbivm1-data-20231005-1611
101G    /u05/app/sharedrepo/vmsdata/.ACFS/snaps/vdisk_dbivm1-data-20231005-1626
2.1G    /u05/app/sharedrepo/vmsdata/.ACFS/snaps/vm_dbivm1

Altering data on the filesystem

Now let's simulate data loss on my VM's data filesystem:

ssh [email protected]
cd  /data01
rm -rf dummyfile_0*.dmf
ls -lrt | head
total 71596
drwx------. 2 root root 16384 Oct  5 09:30 lost+found
-rw-r--r--. 1 root root 10930 Oct  5 10:03 dummyfile_1000.dmf
-rw-r--r--. 1 root root 11321 Oct  5 10:03 dummyfile_1001.dmf
-rw-r--r--. 1 root root 32045 Oct  5 10:03 dummyfile_1002.dmf
-rw-r--r--. 1 root root 27619 Oct  5 10:03 dummyfile_1003.dmf
-rw-r--r--. 1 root root  9086 Oct  5 10:03 dummyfile_1004.dmf
-rw-r--r--. 1 root root 31690 Oct  5 10:03 dummyfile_1005.dmf
-rw-r--r--. 1 root root 23343 Oct  5 10:03 dummyfile_1006.dmf
-rw-r--r--. 1 root root  3594 Oct  5 10:03 dummyfile_1007.dmf

OK, I've lost data, and I now need to compare with an old copy of my vdisk and pick up the lost files from it.

Attaching a vdisk to an existing VM

I will attach an old version of the vdisk to my VM. I need to specify the --live option because I can't reboot my VM now:

odacli modify-vm -n dbivm1 -avd dbivm1-data-20231005-1611 --live

odacli describe-job -i d1f15f1a-5499-486d-83fb-18b0225ede57

Job details
----------------------------------------------------------------
                     ID:  d1f15f1a-5499-486d-83fb-18b0225ede57
            Description:  VM dbivm1 modification
                 Status:  Success
                Created:  October 5, 2023 5:11:09 PM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate dependency resources            October 5, 2023 5:11:09 PM CEST     October 5, 2023 5:11:09 PM CEST     Success
Define VM locally                        October 5, 2023 5:11:09 PM CEST     October 5, 2023 5:11:09 PM CEST     Success
Validate vDisk attachment pre-reqs       October 5, 2023 5:11:09 PM CEST     October 5, 2023 5:11:09 PM CEST     Success
Attach vDisks                            October 5, 2023 5:11:09 PM CEST     October 5, 2023 5:11:09 PM CEST     Success
Edit VM CRS Configuration                October 5, 2023 5:11:09 PM CEST     October 5, 2023 5:11:09 PM CEST     Success
Save domain in ACFS                      October 5, 2023 5:11:09 PM CEST     October 5, 2023 5:11:09 PM CEST     Success
Modify VM metadata                       October 5, 2023 5:11:09 PM CEST     October 5, 2023 5:11:09 PM CEST     Success
Persist metadata                         October 5, 2023 5:11:09 PM CEST     October 5, 2023 5:11:09 PM CEST     Success

Connecting this vdisk to the VM was quite easy.
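
From inside the guest, the attached clone shows up as the next free virtio device, /dev/vdc in my case. A quick sanity check before touching LVM (my own suggestion, nothing odacli requires):

ssh [email protected]
lsblk -d /dev/vdb /dev/vdc
fdisk -l /dev/vdc | grep GB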

Making data from vdisk available

The data on this vdisk is not yet usable because, being a block-level clone, it carries the same LVM UUIDs and volume group name as the source vdisk currently in use. vgimportclone solves this by renaming the volume group and regenerating the UUIDs:

mkdir /rescue01
vgimportclone -i -n vg_rescue /dev/vdc

vgdisplay
  --- Volume group ---
  VG Name               vg_data
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <100.00 GiB
  PE Size               4.00 MiB
  Total PE              25599
  Alloc PE / Size       20480 / 80.00 GiB
  Free  PE / Size       5119 / <20.00 GiB
  VG UUID               L0BHcA-0h03-d4kz-0eY3-xx1a-N6ZF-7DUIQP

  --- Volume group ---
  VG Name               ol
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <49.00 GiB
  PE Size               4.00 MiB
  Total PE              12543
  Alloc PE / Size       12543 / <49.00 GiB
  Free  PE / Size       0 / 0
  VG UUID               QPR9Tk-CqNq-GsZ1-1qVO-tcDH-bcfh-EuBRQl

  --- Volume group ---
  VG Name               vg_rescue
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <100.00 GiB
  PE Size               4.00 MiB
  Total PE              25599
  Alloc PE / Size       20480 / 80.00 GiB
  Free  PE / Size       5119 / <20.00 GiB
  VG UUID               NiWkOI-5dvL-f2Pf-z7Wa-bk9c-4TvL-JK09E8

 vgscan
  Reading volume groups from cache.
  Found volume group "vg_data" using metadata type lvm2
  Found volume group "ol" using metadata type lvm2
  Found volume group "vg_rescue" using metadata type lvm2

lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg_data/lv_data
  LV Name                lv_data
  VG Name                vg_data
  LV UUID                GcfE3o-X5YB-CJF3-emBr-socY-yJIZ-Wiq5N0
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2023-10-05 09:30:45 -0400
  LV Status              available
  # open                 1
  LV Size                80.00 GiB
  Current LE             20480
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           252:2

  --- Logical volume ---
  LV Path                /dev/ol/swap
  LV Name                swap
  VG Name                ol
  LV UUID                Iu1Xk6-2aAC-0ZDd-amQl-k2LD-f3kv-RLSFFW
  LV Write Access        read/write
  LV Creation host, time dhcp-10-36-0-229, 2023-10-05 08:01:24 -0400
  LV Status              available
  # open                 2
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           252:1

  --- Logical volume ---
  LV Path                /dev/ol/root
  LV Name                root
  VG Name                ol
  LV UUID                wWG6wE-cF9r-Li2x-BYU6-OhAJ-cZZa-dj0pcn
  LV Write Access        read/write
  LV Creation host, time dhcp-10-36-0-229, 2023-10-05 08:01:24 -0400
  LV Status              available
  # open                 1
  LV Size                <44.00 GiB
  Current LE             11263
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           252:0

  --- Logical volume ---
  LV Path                /dev/vg_rescue/lv_data
  LV Name                lv_data
  VG Name                vg_rescue
  LV UUID                GcfE3o-X5YB-CJF3-emBr-socY-yJIZ-Wiq5N0
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2023-10-05 09:30:45 -0400
  LV Status              NOT available
  LV Size                80.00 GiB
  Current LE             20480
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

Let's activate the logical volume to make it available, then mount the filesystem on /rescue01:

lvchange -a y /dev/vg_rescue/lv_data

mount /dev/mapper/vg_rescue-lv_data /rescue01/

df -h

Filesystem                     Size  Used Avail Use% Mounted on
devtmpfs                        16G     0   16G   0% /dev
tmpfs                           16G     0   16G   0% /dev/shm
tmpfs                           16G  8.6M   16G   1% /run
tmpfs                           16G     0   16G   0% /sys/fs/cgroup
/dev/mapper/ol-root             44G  1.4G   43G   4% /
/dev/vda1                     1014M  184M  831M  19% /boot
/dev/mapper/vg_data-lv_data     79G  127M   75G   1% /data01
tmpfs                          3.2G     0  3.2G   0% /run/user/0
/dev/mapper/vg_rescue-lv_data   79G  144M   75G   1% /rescue01

“Restored” data is now available, and I can compare it with my live data vdisk.
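
To get a first overview of what differs between the live filesystem and the restored copy, a recursive diff or a dry-run rsync is enough. This is just a suggestion; pick whatever comparison tool you prefer.

# report files that only exist on one side, or whose content differs
diff -rq /data01 /rescue01 | head
# or list, without copying anything (-n = dry run), what is missing from the live filesystem
rsync -avn --ignore-existing /rescue01/ /data01/ | head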

Putting back files from old vdisk version

Let's copy back the files missing from my /data01 filesystem:

cd  /data01
ls -lrt | head
total 71596
drwx------. 2 root root 16384 Oct  5 09:30 lost+found
-rw-r--r--. 1 root root 10930 Oct  5 10:03 dummyfile_1000.dmf
-rw-r--r--. 1 root root 11321 Oct  5 10:03 dummyfile_1001.dmf
-rw-r--r--. 1 root root 32045 Oct  5 10:03 dummyfile_1002.dmf
-rw-r--r--. 1 root root 27619 Oct  5 10:03 dummyfile_1003.dmf
-rw-r--r--. 1 root root  9086 Oct  5 10:03 dummyfile_1004.dmf
-rw-r--r--. 1 root root 31690 Oct  5 10:03 dummyfile_1005.dmf
-rw-r--r--. 1 root root 23343 Oct  5 10:03 dummyfile_1006.dmf
-rw-r--r--. 1 root root  3594 Oct  5 10:03 dummyfile_1007.dmf

cd /rescue01
ls -lrt | head
total 89860
drwx------. 2 root root 16384 Oct  5 09:30 lost+found
-rw-r--r--. 1 root root 15781 Oct  5 10:03 dummyfile_0001.dmf
-rw-r--r--. 1 root root 14976 Oct  5 10:03 dummyfile_0002.dmf
-rw-r--r--. 1 root root 22209 Oct  5 10:03 dummyfile_0003.dmf
-rw-r--r--. 1 root root 20250 Oct  5 10:03 dummyfile_0005.dmf
-rw-r--r--. 1 root root 21634 Oct  5 10:03 dummyfile_0004.dmf
-rw-r--r--. 1 root root  2424 Oct  5 10:03 dummyfile_0006.dmf
-rw-r--r--. 1 root root 23033 Oct  5 10:03 dummyfile_0007.dmf
-rw-r--r--. 1 root root 11018 Oct  5 10:03 dummyfile_0008.dmf

cp /rescue01/dummyfile_0*.dmf /data01/

cd  /data01
ls -lrt dummyfile_0*.dmf | head
-rw-r--r--. 1 root root 15781 Oct  5 11:36 dummyfile_0001.dmf
-rw-r--r--. 1 root root 14976 Oct  5 11:36 dummyfile_0002.dmf
-rw-r--r--. 1 root root 22209 Oct  5 11:36 dummyfile_0003.dmf
-rw-r--r--. 1 root root 21634 Oct  5 11:36 dummyfile_0004.dmf
-rw-r--r--. 1 root root 20250 Oct  5 11:36 dummyfile_0005.dmf
-rw-r--r--. 1 root root  2424 Oct  5 11:36 dummyfile_0006.dmf
-rw-r--r--. 1 root root 23033 Oct  5 11:36 dummyfile_0007.dmf
-rw-r--r--. 1 root root 11018 Oct  5 11:36 dummyfile_0008.dmf
-rw-r--r--. 1 root root 27026 Oct  5 11:36 dummyfile_0009.dmf
-rw-r--r--. 1 root root 17601 Oct  5 11:36 dummyfile_0010.dmf
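
The cp with a wildcard works here because I know exactly which files were deleted. If you don't, an rsync restricted to missing files brings back only what's gone, without overwriting files that changed since the clone was taken (a sketch; excluding lost+found is my own addition):

rsync -av --ignore-existing --exclude 'lost+found' /rescue01/ /data01/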

Removing the restored vdisk

I don't need the old data vdisk anymore, so let's first clean it up inside the VM:

umount /rescue01
rmdir /rescue01/

vgchange -a n vg_rescue

Back on the ODA, let's detach the vdisk from the VM (again with the --live option):

odacli modify-vm -n dbivm1 -dvd dbivm1-data-20231005-1611 --live

odacli describe-job -i 38d8a632-0a4d-46c4-88dd-f2abcd26835a

Job details
----------------------------------------------------------------
                     ID:  38d8a632-0a4d-46c4-88dd-f2abcd26835a
            Description:  VM dbivm1 modification
                 Status:  Success
                Created:  October 5, 2023 5:37:08 PM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate dependency resources            October 5, 2023 5:37:08 PM CEST     October 5, 2023 5:37:08 PM CEST     Success
Define VM locally                        October 5, 2023 5:37:08 PM CEST     October 5, 2023 5:37:08 PM CEST     Success
Validate vDisk detachment pre-reqs       October 5, 2023 5:37:08 PM CEST     October 5, 2023 5:37:08 PM CEST     Success
Detach vDisks                            October 5, 2023 5:37:08 PM CEST     October 5, 2023 5:37:09 PM CEST     Success
Edit VM CRS Configuration                October 5, 2023 5:37:09 PM CEST     October 5, 2023 5:37:09 PM CEST     Success
Save domain in ACFS                      October 5, 2023 5:37:09 PM CEST     October 5, 2023 5:37:09 PM CEST     Success
Modify VM metadata                       October 5, 2023 5:37:09 PM CEST     October 5, 2023 5:37:09 PM CEST     Success
Persist metadata                         October 5, 2023 5:37:09 PM CEST     October 5, 2023 5:37:09 PM CEST     Success

That’s it, my VM is now back to its original configuration with only one data vdisk.
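
As a final check, the describe-vm output should now list only the original data vdisk (simply re-running the command used earlier):

odacli describe-vm -n dbivm1 | grep -i vdisks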

Cleaning up the old vdisk clones

Removing a vdisk clone is quite easy:

odacli list-vdisks

Name                  VM storage            Size        Shared      Sparse      Created                   Updated
--------------------  --------------------  ----------  ----------  ----------  ------------------------  ------------------------
dbivm1-data           VMsDATA               100.00 GB   NO          NO          2023-10-05 11:33:22 CEST  2023-10-05 11:33:22 CEST
dbivm1-data           VMsDATA               100.00 GB   NO          NO          2023-10-05 16:11:56 CEST  2023-10-05 16:11:56 CEST
-20231005-1611
dbivm1-data           VMsDATA               100.00 GB   NO          NO          2023-10-05 16:26:24 CEST  2023-10-05 16:26:24 CEST
-20231005-1626


odacli delete-vdisk -n dbivm1-data-20231005-1611

sleep 30 ; odacli list-vdisks
Name                  VM storage            Size        Shared      Sparse      Created                   Updated
--------------------  --------------------  ----------  ----------  ----------  ------------------------  ------------------------
dbivm1-data           VMsDATA               100.00 GB   NO          NO          2023-10-05 11:33:22 CEST  2023-10-05 11:33:22 CEST
dbivm1-data           VMsDATA               100.00 GB   NO          NO          2023-10-05 16:26:24 CEST  2023-10-05 16:26:24 CEST
-20231005-1626


odacli delete-vdisk -n dbivm1-data-20231005-1626

sleep 30 ; odacli list-vdisks

Name                  VM storage            Size        Shared      Sparse      Created                   Updated
--------------------  --------------------  ----------  ----------  ----------  ------------------------  ------------------------
dbivm1-data           VMsDATA               100.00 GB   NO          NO          2023-10-05 11:33:22 CEST  2023-10-05 11:33:22 CEST

Conclusion

This is a smart solution for this kind of need. The only cost is the ODA storage, and everyone knows that it's quite expensive. But cloning a vdisk is lightning fast on the ODA, so VM downtime is kept to a minimum.