Another full day in Manchester. This morning I chose to follow a slightly different stream, so today will not be all about the Oracle Optimizer 12c, but more about the OS and virtualization.
Virtualization is an interesting topic, but it is also a complex one: the concept seems simple, yet there are issues around licensing, support and performance.
Why use Oracle VM?
Everyone thinks about VMware when it comes to the first two points, because there are known issues with running Oracle in that configuration.
I attended a session hosted by Mickey Bharat, Director of Virtualization & Linux EMEA Channels at Oracle. As expected it wasn’t a technical session but a discussion about the benefits of using Oracle VM. He also gave an explanation for the licensing and support issues: there are no agreements between Oracle and the other vendors (competitors), so Oracle ‘cannot’ certify either the platform or the way CPU resources are partitioned.
In that talk, Mr. Bharat acknowledged that VMware and Hyper-V are currently the leaders and are pretty good at what they do. He also acknowledged that a company typically has many providers/suppliers today, and that this will always be the case in the future (even with Cloud services).
Oracle’s strategy regarding virtualization is as simple as for engineered systems: use a system that is designed to run the workload you want to put on it. Oracle VM is a product that solves the licensing and support issues. Moreover, it has been designed for running Oracle applications and is integrated into the management platform.
So Oracle tells us we have the choice to pick from anywhere in Oracle’s stack. If you have already invested in IBM/HP/other hardware and VMware, that’s not a problem. You should consider running Oracle VM not to replace VMware but in parallel with it, for Oracle workloads, because that is what it has been designed for.
I have to say this is Oracle’s point of view, but it makes sense given their current strategy and future integration with Cloud services.
Back to something closer to my domain. I attended two sessions about the optimizer:
- Beginners’ Guide to Cost Based Optimization from Jonathan Lewis
- Understanding Optimizer statistics from Tom Kyte
Jonathan Lewis reminded us that cost is there to express time, and showed with examples that it is estimated directly from the amount of physical work that will be necessary to get the data. He also demonstrated that, by design, there are cases where the optimizer will make bad assumptions.
Tom Kyte’s session was more about the statistics. The more accurate the statistics, the more precise the optimizer’s cardinality estimates should be, and as a consequence the better the plan. Letting the optimizer know what the data looks like is the key to a good execution plan, and I addressed the same problem in my own session.
There is a lot to say about the optimizer, but the main message I would spread is: don’t play with the default values unless you really need to. Leave the defaults in place, for the dbms_stats preferences as well as for the *optimizer* parameters.
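As a small sketch of that advice (MY_SCHEMA and MY_TABLE are placeholders, not from the sessions): you can inspect the current default for a dbms_stats preference with dbms_stats.get_prefs, and if one table genuinely needs a different setting, scope the change to that table with set_table_prefs rather than changing the global default.

```sql
-- Check the current global default for a dbms_stats preference
-- (AUTO_SAMPLE_SIZE is the default for ESTIMATE_PERCENT)
SELECT dbms_stats.get_prefs('ESTIMATE_PERCENT') FROM dual;

-- If one table really needs a non-default setting, change it
-- only for that table instead of globally.
BEGIN
  dbms_stats.set_table_prefs('MY_SCHEMA', 'MY_TABLE',
                             'ESTIMATE_PERCENT', '100');
END;
/
```

This keeps the deviation documented and reversible, while every other table continues to use the defaults.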
Then, when you have a problem, you have to find out whether it is a general problem or a problem with only a few statements. Only in the first case should you consider changing a global parameter. In the second case, you should analyze the issue and understand why you have the problem, in order to find a suitable solution.
Tomorrow will be the last day, but we will not forget you and there will be some more blogging!