Introduction

The Oracle Database Appliance X10 lineup has been available since September 2023. Compared to the X9-2 lineup, the biggest changes are the AMD EPYC processors replacing the Intel Xeons, and a new licensing model for Standard Edition 2, clarified by Oracle several weeks ago. Apart from these novelties, the models are quite similar to the previous ones: the Small model for basic needs, an HA model with RAC and high-capacity storage for big critical databases, and the much more popular Large model for most needs.

2 kinds of disks inside the ODA X10-L

The ODA I’ve worked on is an X10-L with 2 disk expansions, meaning that I have the 2x 6.8TB disks from the base configuration plus 4x 6.8TB additional disks. The first 4 disks are classic disks visible on the front panel of the ODA. As there are only 4 bays in the front, the other disks are internal, called AIC for Add-In Card (PCIe). You can have up to 3 disk expansions, meaning 4 disks in the front and 4 AIC disks inside the server. You should know that only the front disks are hot-swappable: the other disks being PCIe cards, you will need to shut down the server and open its cover to remove, add or replace one of them.

6.8TB is the raw capacity; consider that the real capacity is more like 6.2TB, and the usable capacity will be lower still, as you will need ASM redundancy to protect your blocks. In the ODA documentation, you will find the usable capacity for each disk configuration.
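
You can also estimate the usable capacity yourself. Here is a minimal sketch of the arithmetic, assuming NORMAL (2-way mirror) ASM redundancy and ignoring the free space ASM keeps in reserve to rebalance after a disk failure:

# 6 data disks with a real capacity of about 6.2TB each
awk 'BEGIN { raw = 6 * 6.2; printf "raw: %.1f TB, usable with NORMAL redundancy: ~%.1f TB\n", raw, raw/2 }'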

2 AIC disks inside an ODA X10-L. The first 4 disks are in the front.

First contact with X10-L – using odacli

odacli describe-system is very useful for an overview of the ODA you’re connected to:

odacli describe-system
Appliance Information
----------------------------------------------------------------
                     ID: 3fcd1093-ea74-4f41-baa1-f325b469a3e1
               Platform: X10-2L
        Data Disk Count: 10
         CPU Core Count: 4
                Created: January 10, 2024 2:26:43 PM CET

System Information
----------------------------------------------------------------
                   Name: dc1oda002
            Domain Name: ad.dbiblogs.ch
              Time Zone: Europe/Zurich
             DB Edition: EE
            DNS Servers: 10.100.50.8 10.100.50.9
            NTP Servers: 10.100.50.8 10.100.50.9


Disk Group Information
----------------------------------------------------------------
DG Name                   Redundancy                Percentage
------------------------- ------------------------- ------------
DATA                      NORMAL                    85
RECO                      NORMAL                    15

Data Disk Count is not what I expected. This is normally the number of DATA disks, so it should be 6 on this ODA, not 10.

Let’s do a show disk with odaadmcli:

odaadmcli show disk
        NAME            PATH            TYPE            STATE           STATE_DETAILS
        pd_00           /dev/nvme0n1    NVD             ONLINE          Good
        pd_01           /dev/nvme1n1    NVD             ONLINE          Good
        pd_02           /dev/nvme3n1    NVD             ONLINE          Good
        pd_03           /dev/nvme2n1    NVD             ONLINE          Good
        pd_04_c1        /dev/nvme8n1    NVD             ONLINE          Good
        pd_04_c2        /dev/nvme9n1    NVD             ONLINE          Good
        pd_05_c1        /dev/nvme6n1    NVD             ONLINE          Good
        pd_05_c2        /dev/nvme7n1    NVD             ONLINE          Good

OK, this command only displays the DATA disks, so the system disks are not in this list, but there are still 8 disks, not 6.
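
A quick count of the pd_ devices confirms it; this trivial filter on the output above should return 8 here:

odaadmcli show disk | grep -c pd_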

Let’s have a look at the system side.

First contact with X10-L – using system commands

What is detected by the OS?

lsblk | grep disk
nvme9n1                      259:0       0   3.1T  0 disk
nvme6n1                      259:6       0   3.1T  0 disk
nvme8n1                      259:12      0   3.1T  0 disk
nvme7n1                      259:18      0   3.1T  0 disk
nvme4n1                      259:24      0 447.1G  0 disk
nvme5n1                      259:25      0 447.1G  0 disk
nvme3n1                      259:26      0   6.2T  0 disk
nvme0n1                      259:27      0   6.2T  0 disk
nvme1n1                      259:28      0   6.2T  0 disk
nvme2n1                      259:29      0   6.2T  0 disk
asm/acfsclone-242            250:123905  0   150G  0 disk  /opt/oracle/oak/pkgrepos/orapkgs/clones
asm/commonstore-242          250:123906  0     5G  0 disk  /opt/oracle/dcs/commonstore
asm/odabase_n0-242           250:123907  0    40G  0 disk  /u01/app/odaorabase0
asm/orahome_sh-242           250:123908  0    80G  0 disk  /u01/app/odaorahome

This is rather strange. I can see 10 disks: the 2x 450GB disks are for the system (and normally not counted as DATA disks by odacli), and I can also find the 4x 6.2TB disks. But instead of 2 additional 6.2TB disks, I have 4x 3.1TB disks. The overall capacity is correct, 37.2TB, but it’s different from previous ODA generations.
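
To double-check the total, one can sum the sizes reported by lsblk in bytes. This is just a sketch: the awk filter keeps only the NVMe devices bigger than 1TB, i.e. it excludes the 2 system disks:

lsblk -b -d -o NAME,SIZE | awk '/^nvme/ && $2 > 1e12 {sum+=$2} END {printf "%.1f TiB\n", sum/1024^4}'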

Let’s confirm this with fdisk:

fdisk -l /dev/nvme0n1
Disk /dev/nvme0n1: 6.2 TiB, 6801330364416 bytes, 13283848368 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: E20D9013-1982-4F66-B7A2-5FE0B1BC8F74
 
Device                Start         End    Sectors   Size Type
/dev/nvme0n1p1         4096  1328386047 1328381952 633.4G Linux filesystem
/dev/nvme0n1p2   1328386048  2656767999 1328381952 633.4G Linux filesystem
/dev/nvme0n1p3   2656768000  3985149951 1328381952 633.4G Linux filesystem
/dev/nvme0n1p4   3985149952  5313531903 1328381952 633.4G Linux filesystem
/dev/nvme0n1p5   5313531904  6641913855 1328381952 633.4G Linux filesystem
/dev/nvme0n1p6   6641913856  7970295807 1328381952 633.4G Linux filesystem
/dev/nvme0n1p7   7970295808  9298677759 1328381952 633.4G Linux filesystem
/dev/nvme0n1p8   9298677760 10627059711 1328381952 633.4G Linux filesystem
/dev/nvme0n1p9  10627059712 11955441663 1328381952 633.4G Linux filesystem
/dev/nvme0n1p10 11955441664 13283823615 1328381952 633.4G Linux filesystem

fdisk -l /dev/nvme8n1
Disk /dev/nvme8n1: 3.1 TiB, 3400670601216 bytes, 6641934768 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: A3086CB0-31EE-4F78-A6A6-47D53149FDAE

Device              Start        End    Sectors   Size Type
/dev/nvme8n1p1       4096 1328386047 1328381952 633.4G Linux filesystem
/dev/nvme8n1p2 1328386048 2656767999 1328381952 633.4G Linux filesystem
/dev/nvme8n1p3 2656768000 3985149951 1328381952 633.4G Linux filesystem
/dev/nvme8n1p4 3985149952 5313531903 1328381952 633.4G Linux filesystem
/dev/nvme8n1p5 5313531904 6641913855 1328381952 633.4G Linux filesystem
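
As a sanity check, the 633.4G partition size matches the sector count reported by fdisk:

# 1328381952 sectors of 512 bytes each, converted to GiB
echo $(( 1328381952 * 512 / 1024**3 ))
633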

OK, the 6.2TB disks are split into 10 partitions, and the 3.1TB disks into 5 partitions. It makes sense, because ASM needs disks (here, partitions) of the same size inside a diskgroup.
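
If you want to check this on your own ODA without reading each fdisk output, a small loop does the job (a sketch, to be run as root):

# Count the partitions on each NVMe disk
for d in /dev/nvme*n1; do
  printf '%s: %s partitions\n' "$d" "$(lsblk -no TYPE "$d" | grep -c part)"
done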

First contact with X10-L – using ASM

Now let’s have a look within ASM, the most important point being whether ASM manages this storage correctly:

su - grid
sqlplus / as sysasm
select a.name "Diskgroup", round(a.total_mb/1024) "Size GB",
       round(a.free_mb/1024) "Free GB", round(a.usable_file_mb/1024) "Usable GB",
       100*round((a.total_mb-a.free_mb)/a.total_mb,1) "Use%"
  from v$asm_diskgroup a;
Diskgroup                         Size GB    Free GB  Usable GB       Use%
------------------------------ ---------- ---------- ---------- ----------
DATA                                30404      12341       3637         60
RECO                                 7601       5672       2203         30
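
Note that the same figures can be cross-checked without SQL: as the grid user, asmcmd lsdg displays one line per diskgroup, including the Total_MB, Free_MB and Usable_file_MB columns.

asmcmd lsdg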


select name,total_mb/1024 "GB", GROUP_NUMBER from v$asm_disk order by 3 desc;
NAME                                   GB GROUP_NUMBER
------------------------------ ---------- ------------
NVD_S02_S6UENA0W1072P9         633.421875            2
NVD_S02_S6UENA0W1072P10        633.421875            2
NVD_S05_C2_PHAZ25110P9         633.421875            2
NVD_S05_C2_PHAZ25110P10        633.421875            2
NVD_S03_S6UENA0W1073P10        633.421875            2
NVD_S00_S6UENA0W1075P10        633.421875            2
NVD_S01_S6UENA0W1072P10        633.421875            2
NVD_S00_S6UENA0W1075P9         633.421875            2
NVD_S01_S6UENA0W1072P9         633.421875            2
NVD_S04_C2_PHAZ24710P10        633.421875            2
NVD_S04_C2_PHAZ24710P9         633.421875            2
NVD_S03_S6UENA0W1073P9         633.421875            2
NVD_S05_C1_PHAZ25110P2         633.421875            1
NVD_S00_S6UENA0W1075P8         633.421875            1
NVD_S04_C2_PHAZ24710P6         633.421875            1
NVD_S00_S6UENA0W1075P6         633.421875            1
NVD_S02_S6UENA0W1072P1         633.421875            1
NVD_S05_C1_PHAZ25110P3         633.421875            1
NVD_S04_C1_PHAZ24710P2         633.421875            1
NVD_S03_S6UENA0W1073P4         633.421875            1
NVD_S00_S6UENA0W1075P2         633.421875            1
NVD_S02_S6UENA0W1072P6         633.421875            1
NVD_S05_C2_PHAZ25110P6         633.421875            1
NVD_S00_S6UENA0W1075P7         633.421875            1
NVD_S00_S6UENA0W1075P5         633.421875            1
NVD_S04_C2_PHAZ24710P7         633.421875            1
NVD_S04_C2_PHAZ24710P8         633.421875            1
NVD_S02_S6UENA0W1072P4         633.421875            1
NVD_S03_S6UENA0W1073P7         633.421875            1
NVD_S00_S6UENA0W1075P1         633.421875            1
NVD_S04_C1_PHAZ24710P1         633.421875            1
NVD_S01_S6UENA0W1072P2         633.421875            1
NVD_S01_S6UENA0W1072P1         633.421875            1
NVD_S01_S6UENA0W1072P3         633.421875            1
NVD_S03_S6UENA0W1073P5         633.421875            1
NVD_S01_S6UENA0W1072P4         633.421875            1
NVD_S02_S6UENA0W1072P3         633.421875            1
NVD_S01_S6UENA0W1072P7         633.421875            1
NVD_S02_S6UENA0W1072P7         633.421875            1
NVD_S05_C1_PHAZ25110P1         633.421875            1
NVD_S03_S6UENA0W1073P1         633.421875            1
NVD_S01_S6UENA0W1072P5         633.421875            1
NVD_S05_C1_PHAZ25110P5         633.421875            1
NVD_S04_C1_PHAZ24710P3         633.421875            1
NVD_S00_S6UENA0W1075P3         633.421875            1
NVD_S03_S6UENA0W1073P6         633.421875            1
NVD_S02_S6UENA0W1072P8         633.421875            1
NVD_S00_S6UENA0W1075P4         633.421875            1
NVD_S04_C1_PHAZ24710P5         633.421875            1
NVD_S01_S6UENA0W1072P8         633.421875            1
NVD_S04_C1_PHAZ24710P4         633.421875            1
NVD_S03_S6UENA0W1073P3         633.421875            1
NVD_S05_C1_PHAZ25110P4         633.421875            1
NVD_S03_S6UENA0W1073P2         633.421875            1
NVD_S01_S6UENA0W1072P6         633.421875            1
NVD_S05_C2_PHAZ25110P8         633.421875            1
NVD_S02_S6UENA0W1072P5         633.421875            1
NVD_S02_S6UENA0W1072P2         633.421875            1
NVD_S03_S6UENA0W1073P8         633.421875            1
NVD_S05_C2_PHAZ25110P7         633.421875            1
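
This long list is easier to digest with a quick aggregation per diskgroup; here it returns 48 partitions for DATA (group 1) and 12 for RECO (group 2):

select group_number, count(*) "Partitions", round(sum(total_mb)/1024) "GB"
  from v$asm_disk
 where group_number > 0
 group by group_number
 order by group_number;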

This is also different from older generations. On the X9-2 lineup, the RECO diskgroup (group 2) had as many partitions as the DATA diskgroup (group 1). This is no longer the way it works. But regarding the free and usable GB, everything is fine.

Why did Oracle do things differently from older ODAs?

The oldest ODAs used spinning disks, and to maximize performance, Oracle created 2 partitions on each disk: one for DATA on the outer part of the disk, and one for RECO on the inner part. All the DATA partitions were then added to the DATA diskgroup, and all the RECO partitions to the RECO diskgroup. The more disks you had, the faster the read and write speeds were. Redo logs had their own dedicated disks, usually 4 small SSDs using high redundancy.

Nowadays, ODAs mostly use SSDs, and read and write speed is identical wherever the block resides. The number of disks doesn’t matter much either: speed is mostly limited by the PCIe bandwidth and the chips on the SSDs, but as far as I know, the speed of a single NVMe SSD is enough for 95% of the databases.

The internal AIC disks on the ODA X10-L are, for some reason, split into two devices each (hence the _c1 and _c2 suffixes seen earlier), so it’s no longer possible to have the big partitions we had before.
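
If you’re curious about how these two devices map to the physical card, sysfs gives the PCIe controller path behind each NVMe block device (a sketch; nvme list from the nvme-cli package would show models and serial numbers as well):

# Map each NVMe block device to its PCIe controller path
for d in /sys/block/nvme*n1; do
  echo "$d -> $(readlink -f "$d/device")"
done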

Conclusion

This X10-L was initially deployed using version 19.21. As it wasn’t in use yet, and to make sure everything was fine on the hardware and software side, it was decided to do a fresh reimaging using the latest 19.22. It didn’t change anything: odacli still sees 10 disks, but apart from that, everything is fine.

Disk size and partitioning are now different, but it won’t change anything for most of us.