Introduction

You can say a lot about the capabilities of Oracle infrastructure, but nothing compares to the experience gained from real projects. I have done quite a lot of ODA projects over the past years (and projects on other platforms too), and I would like to tell you why I trust this solution. To do that, I will tell you the story of one of my latest ODA projects.

Before choosing ODAs

A serious audit of the current infrastructure is needed: don’t skip that point. Sizing the storage, the memory, and the licenses takes several days, but you will need that study. You cannot make good decisions blindly.

The metrics I would recommend collecting when designing your new ODA infrastructure are listed below (a query sketch for the database-side metrics follows the list):
– the needed segregation (PROD/UAT/DEV/TEST/DR…)
– the DB time of the current databases, consolidated for each group
– the size and growth forecast for each database
– the memory usage for SGA and PGA, and the memory advised by Oracle inside each database
– an overview of future projects coming in the next years
– an overview of legacy applications that will stop using Oracle in the coming years
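
For illustration, here is a minimal sketch of how the DB time and memory metrics could be collected on each source database. It assumes you are licensed for the Diagnostics Pack (AWR views) and that you adapt the snapshot range to your retention:

sqlplus -s / as sysdba <<'EOF'
-- DB time per AWR snapshot interval (the value is cumulative, in microseconds;
-- a negative delta simply indicates an instance restart)
SELECT snap_id,
       ROUND((value - LAG(value) OVER (ORDER BY snap_id)) / 1e6) AS db_time_seconds
FROM   dba_hist_sys_time_model
WHERE  stat_name = 'DB time'
ORDER  BY snap_id;

-- Current memory settings and the SGA sizes advised by Oracle
SELECT name, display_value FROM v$parameter
WHERE  name IN ('sga_target', 'pga_aggregate_target');
SELECT sga_size, estd_db_time FROM v$sga_target_advice;
EOF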

Once done, this audit will help you choose how many ODAs you need, which type of ODA, how many disks on each ODA, and so on. For this particular project, the audit had been done several months earlier and led to an infrastructure composed of 6 X8-2M ODAs with various disk configurations.

Before delivery

I usually provide my customer with an Excel file to collect the essential information for the new environment:
– hostnames of the servers
– purpose of each server
– public network configuration (IP, netmask, gateway, DNS, NTP, domain)
– ILOM network configuration (IP, netmask, gateway – this is for the management interface)
– additional networks configuration (for backup or administration networks if additional interfaces have been ordered)
– the DATA/RECO split of the disks
– the target uid and gid for Linux users on ODA
– the version(s) of the database needed
– the number of cores to enable on each ODA (if using Enterprise Edition – this is how licenses are configured)

With this file, I’m pretty sure to have almost everything before the servers are delivered.

Delivery day

Once the ODAs are delivered, you’ll first need to rack them in your datacenters and plug in the power and network cables. If you are well organized, you’ll have done the network patching beforehand and the cables will be ready to plug in. Racking an ODA S or M is easy: less than one hour, even less if you’re used to racking servers. An ODA HA is a little more complicated, because one ODA HA is actually 2 servers and 1 DAS storage enclosure, or 2 DAS enclosures if you ordered the maximum disk configuration. But these are standard servers, and it shouldn’t take too long or be too complicated.

1st Day

The first day is an important day because you can do a lot if everything is prepared.

You’ll need to download several zipfiles from MOS to deploy an ODA, and these zipfiles are quite big: 10 to 40GB depending on your configuration. Don’t wait to start downloading them from the links in the documentation. You can choose an older software version than the current one, but it’s usually a good idea to deploy the very latest version. You need to download:
– the ISO file for bare metal reimaging
– the GI clone
– the DB clones for the database versions you plan to use
– the patch files, because you will probably need them (we will see why later)

While the download is running, you can connect to the ILOM of each server and configure a static IP address. Finding the initial IP of the ILOM will require help from the network/system team, because each ILOM first requests a dynamic IP from DHCP, which you then need to change.
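
As an example, here is a minimal sketch of that ILOM CLI session (the addresses are placeholders for your own network plan):

ssh root@<dhcp-assigned-ilom-ip>
-> set /SP/network pendingipdiscovery=static
-> set /SP/network pendingipaddress=10.1.1.50
-> set /SP/network pendingipnetmask=255.255.255.0
-> set /SP/network pendingipgateway=10.1.1.1
-> set /SP/network commitpending=true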

Once everything is downloaded, you can start the reimaging of the first ODA from the ILOM of the server, then a few minutes later on the second one if you’ve got multiple ODAs. After the reimaging, you will use configure-firstnet on each ODA to configure the basic network settings of the public interface, so you can connect to the server itself.
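
Once the reimaging ISO attached through the ILOM remote console has done its job, the first network setup is a single interactive command, run as root on the ODA:

odacli configure-firstnet
# the script prompts for the interface (btbond1 on most models),
# then the IP address, the netmask and the gateway of the public network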

Once the first ODA is ready, I usually prepare the json file for the appliance deployment from the template, according to the settings provided in the Excel file. It takes me about 1 hour to make sure nothing is wrong or missing, and then I start the appliance creation. I always create the appliance with a DBTEST database to make sure everything is fine up to database creation. During the first appliance creation, I copy the json file to the other ODA, change the few parameters that differ from the previous one, and start the appliance creation there as well.
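
In practice, the creation boils down to registering the clones in the repository and feeding odacli the json file; a sketch with example file names (yours will depend on the version you deploy):

odacli update-repository -f /opt/dump/odacli-dcs-19.9.0.0.0-201020-GI-19.9.0.0.zip
odacli update-repository -f /opt/dump/odacli-dcs-19.9.0.0.0-201020-DB-19.9.0.0.zip
odacli create-appliance -r /opt/dump/oda01-deploy.json
odacli describe-job -i <job-id>    # follow the creation job until it completes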

Once done, I deploy the additional DB homes, if needed, on both ODAs in parallel.
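
This is again a two-step odacli operation; the clone file name and version below are examples:

odacli update-repository -f /opt/dump/odacli-dcs-19.9.0.0.0-201020-DB-18.12.0.0.zip
odacli create-dbhome -v 18.12.0.0.191015
odacli list-dbhomes    # check the new home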

Then I check the version of the components with odacli describe-components. Be aware that a reimage does not update the microcode of the ODA: if the firmware/BIOS/ILOM are not up to date, you need to apply the patch on top of your deployment, even if the software side is OK. So copy the patch to both ODAs and apply it. It will probably need a reboot or two.
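
The exact patching sequence depends on the release, so follow the README of the patch; as a rough sketch (19.9 is an example version):

odacli update-repository -f /opt/dump/oda-sm-19.9.0.0.0-201020-server.zip
odacli update-dcscomponents -v 19.9.0.0.0
odacli update-server -v 19.9.0.0.0
odacli describe-components    # firmware/BIOS/ILOM should now be up to date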

Once done, I usually configure the number of enabled cores with odacli update-cpucore to match the license. If my customer only has Standard Edition licenses, I also decrease the number of cores, to make sure to benefit from the maximum CPU speed. See why here.
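
For example, to enable only 8 cores (the count depends on your licenses):

odacli update-cpucore -c 8
odacli describe-cpucore    # check the enabled core count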

Finally, I do a sanity check of the 2 ODAs, checking that everything is running fine.
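
My sanity check is nothing fancy, mostly a few odacli commands such as:

odacli list-jobs         # all deployment jobs should show Success
odacli list-databases    # the DBTEST database should be listed as Configured
odacli describe-system   # overview of the appliance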

At the end of this first day, 2 ODAs are ready to use.

2nd Day

I’m quite keen on keeping my ODAs exactly identical. So the next day, I deployed the other ODAs. For this particular project, there was a total of 6 ODAs with various disk and license configurations, but the deployment method is the same.

Based on what I did the previous day, I quite easily deployed the 4 other ODAs, each with its own json deployment file. Then I applied the patch, configured the licenses, and did the sanity checks.

At the end of this second day, my 6 ODAs were ready to use.

3rd Day

2 ODAs came with hardware warnings, and most of the time these warnings are false positives. So I did a check from the ILOM CLI, reset the alerts, and restarted the ILOM. That solved the problem.
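
From the ILOM CLI, that looks roughly like this (the component path is an example; use the one reported as faulted):

-> show /SP/faultmgmt                    # list the active faults
-> set /SYS/MB clear_fault_action=true   # clear the fault on the faulted component
-> reset /SP                             # restart the ILOM (the host keeps running)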

As documentation is quite important for me and for my customer, I spent the rest of the day consolidating all the information and providing a first version of the documentation to the customer.

4th Day

The fourth day was actually the next week. In the meantime, my customer had created a first development database and put it into “production” without any problem.

Every new ODA software release brings new features, and I think it’s worth testing them because they can bring you something very useful.

This time, I was quite curious about the Data Guard feature now included in odacli. Manual Data Guard configuration always takes quite a long time: it’s not very difficult, but a lot of steps are needed. And you have to do it for each database, meaning that it can take days to put all your databases under Data Guard protection.

So I proposed to my customer to test the Data Guard implementation with odacli. I created a test database dedicated to that purpose and followed the online documentation. It took me the whole day, but I was able to create the standby, configure Data Guard, do a switchover and a switchback, and write a clean and complete procedure for the customer. You need to do that, because the documentation has 1 or 2 steps that need more accuracy, and 1 or 2 others that need to be adapted to the customer’s environment.
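
The overall flow is short; a sketch of the main commands (the wizard prompts for the primary/standby details, a backup of the primary must exist, and the id below is a placeholder):

odacli configure-dataguard
odacli list-dataguardstatus                      # check the configuration
odacli switchover-dataguard -i <dataguard-id>    # test a switchover, then switch back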

This new feature will definitely simplify Data Guard configuration; please take a look at my blog post if you’d like to test it.

5th Day

The goal of this day was to write a procedure for configuring Data Guard between the old platform and the new one. An ACFS to ASM conversion was needed as well. So we worked on that point, ran a lot of tests, and finally provided a procedure covering most cases. A DUPLICATE DATABASE FOR STANDBY with a BACKUP LOCATION was used in that procedure.
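
The core of it is a backup-based RMAN duplication without a target connection, roughly like this (the path is an example, and the auxiliary instance must already be started in NOMOUNT):

rman auxiliary / <<'EOF'
DUPLICATE DATABASE FOR STANDBY
  BACKUP LOCATION '/u99/app/oracle/backup/MYDB'
  DORECOVER
  NOFILENAMECHECK;
EOF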

This procedure is not ODA specific: most of the advanced operations on an ODA are done using classic tools.

6th, 7th and 8th days

These days were dedicated to the ODA workshop. It’s the perfect timing for this training, because the customer has already picked up quite a lot of information during the deployment, and the servers are not yet in production, meaning we can use them for demos and exercises. At dbi services, we make our own training material. The ODA workshop ranges from the history of the ODA to the lifecycle management of the platform. You need 2 to 3 days to get a good overview of the solution and to be ready to work on it.

9th and 10th days

These days were actually extra days, but extra days are not useless days. Most often, they are a good addition, because there’s always a need to go deeper into some topics.

This time, we needed to refine the Data Guard procedure and test it with older versions: 12.2, 12.1 and 11.2. We discovered that it didn’t work for 11.2, and we tried to debug it. Finally, we decided to use the ODA Data Guard feature only for 12.1 and later versions, which was acceptable because only a few databases would not be able to go higher than 11.2. We also found that configuring Data Guard from scratch takes only 40 minutes, including the backup/restore operations (for the smallest possible database), which definitely validated the efficiency of this method over manual configuration.

We also studied the best way to create additional listeners, because odacli does not include listener management. Using srvctl was quite convenient and clean, so we provided a procedure to configure these listeners, and we also tested the Data Guard feature with these dedicated listeners: it worked fine.
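
Creating such a listener is a matter of a few srvctl commands, run as the grid user (the listener name and port are examples):

srvctl add listener -listener listener2 -endpoints "TCP:1522"
srvctl start listener -listener listener2
srvctl config listener -listener listener2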

The last task was to provide a procedure for the ACFS to ASM migration. Once migrated to 19c, the 11gR2 databases can be moved to ASM to get rid of ACFS (as ASM had been chosen by the customer for all 12c and later databases). odacli does not provide a mechanism to move a database from ACFS to ASM, but it’s possible to restore an ACFS database to ASM quite easily.
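
One way of doing it is with RMAN image copies, roughly as follows (+DATA is an example disk group; the control files, redo logs, spfile and tempfiles have to be relocated separately):

rman target / <<'EOF'
BACKUP AS COPY DATABASE FORMAT '+DATA';
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
SWITCH DATABASE TO COPY;
RECOVER DATABASE;
ALTER DATABASE OPEN;
EOF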

These 2 days were actually very productive. I also had enough time to send v1.1 of the documentation, with all the procedures my customer and I had worked out together.

The next days

For this project, my job was limited to deploying and configuring the ODAs, training my customer, and sharing the best practices. With a team of experienced DBAs, it will now be quite easy to continue without me, the next task being to migrate each database to this new infrastructure. Even on ODA, the migration process takes days, because there are a lot of databases coming from different versions.

Conclusion

ODA is a great platform for optimizing your work, starting with the migration project. You don’t lose time, because this solution is ready to use. I don’t think it’s possible to be as efficient with another on-premises platform. For sure, you will save a lot of time, you will have fewer problems, you will manage everything by yourself, and the performance will be good. ODA is probably the best option for achieving this kind of project with minimal risk and maximal efficiency.