Franck's Oracle blog
Blog of Franck Pachot, Consultant at dbi services
I'm a Senior Consultant and Oracle Technology Leader at dbi services (Switzerland).
A certified DBA (OCM 11g, OCP 12c, Performance Tuning Expert, Exadata Implementation), I cover all database areas: architecture, data modeling, database design, tuning, operations, and training.
My preferred area is Oracle troubleshooting and performance tuning, especially when I manage to enable an efficient collaboration between the developers and the DBAs.
As an Oracle ACE, I participate in the Oracle community through forums, blogs, articles, and presentations. You can follow my activity on this blog and on my Twitter account: @FranckPachot
12c comes with more dynamic sampling, now called dynamic statistics, using the new Adaptive Dynamic Sampling algorithm. The goal is to have better estimations, and better estimations give better execution plans. However, this new approach can increase parse time because dynamic sampling kicks in more often, reads more blocks, and runs more queries.
It's probably not a problem for applications that are well designed: using bind variables to avoid too many parses, and having good statistics (histograms where they make sense, extended statistics for correlated columns). The SQL Plan Directives are there to trigger dynamic sampling only where misestimates have been observed. An OLTP application should not parse often and should not have huge misestimates. A reporting use case can spend more time on parsing, and the few seconds spent on dynamic sampling will probably benefit the execution time.
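As a side note, the directives that trigger this dynamic sampling can be checked in the dictionary; here is a sketch of such a query (the DEMO schema filter is just an example):

```sql
-- SQL Plan Directives observed for a schema, with their state and reason:
select d.directive_id, o.owner, o.object_name, d.type, d.state, d.reason
from dba_sql_plan_directives d
join dba_sql_plan_dir_objects o using (directive_id)
where o.owner = 'DEMO'
order by d.directive_id;
```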
In addition, in order to lower the dynamic sampling overhead, Oracle 12c Adaptive Dynamic Sampling runs its queries with the /*+ result_cache(snapshot=3600) */ hint. The result is cached in the result cache and is not invalidated by dependencies. So even when the underlying table is updated, the dynamic sampling result remains valid in the cache for 3600 seconds. This is why doing more dynamic sampling is not a big overhead, provided that:
- your result cache is sized accordingly. The default (0.25% of MEMORY_TARGET, or 0.5% of SGA_TARGET, or 1% of SHARED_POOL_SIZE) is probably too low to fit all the dynamic sampling results for frequently parsed statements.
- your result cache is enabled, meaning that you are in Enterprise Edition
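To illustrate the first point, here is a sketch of how to check the sizing and see the dynamic sampling results in the cache (the 100M value is only an example to adapt to your workload; DS_SVC is the comment used by the Adaptive Dynamic Sampling queries):

```sql
-- Check the current size, then resize the result cache:
show parameter result_cache_max_size
alter system set result_cache_max_size=100M scope=both;
-- Dynamic sampling results show up with DS_SVC in their name:
select id, name, creation_timestamp
from v$result_cache_objects
where name like '%DS_SVC%';
```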
If you have a badly designed application (not using bind variables, parsing at each execution) and you are in Standard Edition, then there is a risk that the parse contention you already suffer from (CPU and latches) will become more problematic (more CPU and I/O). Let's try the following:
declare
 c sys_refcursor;
begin
 for i in 1..100 loop
  open c for 'select count(*) COUNT'||i||' from DEMO_TABLE where a+b=c+d';
  dbms_sql.return_result(c);
 end loop;
end;
/

which runs the same statement 100 times without using bind variables. So I'm parsing it each time, but it reads the same table with the same predicate, so the result of dynamic sampling should not change a lot.
I'll run it in Standard and Enterprise editions, with no dynamic sampling, and with the new AUTO level.
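For reference, the two settings can be switched at session level like this (level 11 is the new AUTO level that enables Adaptive Dynamic Sampling, and 0 disables dynamic sampling):

```sql
alter session set optimizer_dynamic_sampling=0;  -- no dynamic sampling
alter session set optimizer_dynamic_sampling=11; -- adaptive dynamic sampling (AUTO)
```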
Today I presented SQL Plan Directives at the SOUG Romandie event. I had a question about the import/export of directives with Data Pump. The idea is that a lot of testing has been done in QA in order to validate the upgrade to 12c. A few directives had bad consequences (see Ludovico Caldara's blog post for an example), so those directives have been disabled. When going to production, they want to start with those directives disabled. Yes, they can be imported. We have to pack them into a staging table, import that table, and unpack them. It's similar to what we do with SQL Plan Baselines.
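The pack/transport/unpack steps can be sketched with the DBMS_SPD package (the staging table name and schema here are examples, not fixed names):

```sql
-- On the source: create a staging table and pack the directives into it
exec dbms_spd.create_stgtab_directive(table_name=>'SPD_STAGE', table_owner=>'DEMO')
select dbms_spd.pack_stgtab_directive('SPD_STAGE','DEMO') directives_packed from dual;
-- Transport the SPD_STAGE table with Data Pump, then on the target:
select dbms_spd.unpack_stgtab_directive('SPD_STAGE','DEMO') directives_unpacked from dual;
```

The pack and unpack calls are functions returning the number of directives processed, which is a quick sanity check that nothing was lost on the way.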
This blog post is something I had in draft, and Laurent Schneider's blog post reminded me to publish it. With the right RMAN configuration you should not have to manage backup files yourself: the RMAN catalog knows them and RMAN should be able to access them. If you want to keep a backup for a long time, you just tell RMAN to keep it.
But sometimes RMAN is not connected to your tape backup software, or the backups are not shared across all sites, and you have to restore or copy the set of files that is needed for a restore database or a duplicate database.
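Both ideas can be sketched with standard RMAN commands (the tag and the one-year duration are examples):

```sql
-- Keep a backup beyond the retention policy:
backup database keep until time 'sysdate+365' tag 'YEARLY';
-- List the backup pieces a restore would need, without restoring anything:
restore database preview;
restore database preview summary;
```

RESTORE ... PREVIEW is exactly the report you want before copying files to another site: it reads the repository and lists the backups that the restore would use.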
If you followed the 1M rows insert variations, you have seen how something simple brought me into an investigation with the latest tools available to see where the time is spent. When the time is CPU time, there is no wait event to give the detail of DB Time. I used perf events in the previous post and I'll now visualize them with a CPU Flame Graph. My goal is to understand why my test case is 2x longer in 126.96.36.199 vs 188.8.131.52
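The recording and rendering steps can be sketched as follows (the pid, duration, and script paths are examples; stackcollapse-perf.pl and flamegraph.pl come from Brendan Gregg's FlameGraph scripts):

```shell
# Sample the call stacks of the server process for one minute:
perf record -g -p 1234 sleep 60
# Fold the stacks and render the interactive SVG:
perf script | ./stackcollapse-perf.pl | ./flamegraph.pl > flamegraph.svg
```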
The idea of using Flame Graphs comes from Luca Canali:
In the previous post I explained that I observed a 2x CPU usage when running my insert test case in 184.108.40.206 vs. 220.127.116.11. I discussed that with @martinberx @OracleSK @TanelPoder @xtner @Ycolin @fritshoogland @lleturgez @LucaCanaliDB on Twitter and it became clear that:
This year is the year of migration to 12c. Each Oracle version has had its CBO feature that makes it challenging. The most famous was bind variable peeking in 9iR2. Cardinality feedback in 11g also came with some surprises. 12c comes with SPD in any edition, accompanied by Adaptive Dynamic Sampling. If you want to know more about them, the next date is in Switzerland: http://www.soug.ch/events/sig-150521-agenda.html
In the first variation I did a bulk insert with the PL/SQL FORALL statement. A comment suggested adding the APPEND_VALUES hint that appeared in 11gR2. APPEND_VALUES does the same as APPEND, but for inserts with a VALUES clause: it inserts in direct-path, writing directly into the datafiles and bypassing the buffer cache. Of course, this makes sense only with bulk insert, because you don't want to do that for only one row.
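A minimal sketch of such a bulk direct-path insert (the table name and row count are examples, not the exact code of the original variation):

```sql
declare
  type t_rows is table of DEMO_TABLE%rowtype;
  r t_rows;
begin
  select * bulk collect into r from DEMO_TABLE where rownum <= 1000;
  forall i in 1 .. r.count
    insert /*+ append_values */ into DEMO_TABLE values r(i);
  commit; -- direct-path loads must be committed before the session can query the table again
end;
/
```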