{"id":14280,"date":"2020-06-10T04:49:58","date_gmt":"2020-06-10T02:49:58","guid":{"rendered":"https:\/\/www.dbi-services.com\/blog\/oracle-12c-clustering\/"},"modified":"2020-06-10T04:49:58","modified_gmt":"2020-06-10T02:49:58","slug":"oracle-12c-clustering","status":"publish","type":"post","link":"https:\/\/www.dbi-services.com\/blog\/oracle-12c-clustering\/","title":{"rendered":"Oracle 12c &#8211; reorg and split table with clustering"},"content":{"rendered":"<h2>By Franck Pachot<\/h2>\n<p>In this series of small examples on recent features, I imported in a <a href=\"https:\/\/www.dbi-services.com\/blog\/oracle-index-on-joins\/oracle-select-from-file\" rel=\"noopener noreferrer\" target=\"_blank\">previous post<\/a> the statistics of covid-19 per day and per country. This is typical of data that comes as a time-series ordered by date, because this is how it is generated day after day, but which you probably want to query from another dimension, like per country.<\/p>\n<p>If you want to ingest data faster, you keep it in the order of arrival and insert it in heap table blocks. If you want to optimize for future queries on the other dimension, you may load it in a table with a specialized organization where each row has its place: an Index Organized Table, a Hash Cluster, a partitioned table, or a combination of those. With Oracle we are used to storing data without the need to reorganize it. It is a multi-purpose database. But in 12c we have many features that make this reorganization easier, like partitioning, online move and online split. We can then think about a two-phase lifecycle for some operational tables that are used later for analytics:<\/p>\n<ul>\n<li>Fast ingest and query on a short time window: we insert data on the fly, with conventional inserts, into a conventional heap table. 
Queries on recent data are fast, as the rows are colocated in the order they arrived.<\/li>\n<li>Optimal query on history: we regularly reorganize the latest ingested rows physically, clustered on another dimension, because we will query large time ranges on this other dimension<\/li>\n<\/ul>\n<p>Partitioning is the way to do those operations. We can have a weekly partition for the current week. When the week is over, new rows go to a new partition (11g PARTITION BY RANGE &#8230; INTERVAL) and we can optionally merge the old partition with the one containing older data, per month or year for example, to get larger time ranges for the past data. This merge is easy (18c MERGE PARTITIONS &#8230; ONLINE). And while doing that we can reorganize rows to be clustered together. This is what I&#8217;m doing in this post.<\/p>\n<h3>Partitioning<\/h3>\n<p>From the table I created in the previous post, I create an index on GEOID (as the goal is to query by country) and partition it by range on DATEREP:<\/p>\n<pre><code>\nSQL&gt; create index covid_geoid on covid(geoid);\n\nIndex created.\n\nSQL&gt; alter table covid modify partition by range(daterep) interval (numToYMinterval(1,'year')) ( partition old values less than (date '2020-01-01') , partition new values less than (date '2021-01-01') ) online;\n\nTable altered.\n<\/code><\/pre>\n<p>This is an online operation in 12cR2. 
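<\/p>\n<p>As a quick check of the new layout, the partition boundaries can be listed from the dictionary (a generic query, assuming the same COVID table as above):<\/p>\n<pre><code>\nSQL&gt; select partition_name, high_value from user_tab_partitions where table_name='COVID' order by partition_position;\n<\/code><\/pre>\n<p>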
So I have two partitions, one for &#8220;old&#8221; data and one for &#8220;new&#8221; data.<\/p>\n<p>I query all dates for one specific country:<\/p>\n<pre><code>\nSQL&gt; select trunc(daterep,'mon'), max(cases) from covid where geoid='US' group by trunc(daterep,'mon') order by 1\n  2  \/\n   TRUNC(DATEREP,'MON')    MAX(CASES)\n_______________________ _____________\n01-DEC-19                           0\n01-JAN-20                           3\n01-FEB-20                          19\n01-MAR-20                       21595\n01-APR-20                       48529\n01-MAY-20                       33955\n01-JUN-20                       25178\n<\/code><\/pre>\n<p>This reads rows scattered through the whole table because they were inserted day after day.<\/p>\n<p>This is visible in the execution plan: the optimizer does not use the index but a full table scan:<\/p>\n<pre><code>\nSQL&gt; select * from dbms_xplan.display_cursor(format=&gt;'+cost iostats last')\n  2  \/\n                                                                                       PLAN_TABLE_OUTPUT\n________________________________________________________________________________________________________\nSQL_ID  2nyu7m59d7spv, child number 0\n-------------------------------------\nselect trunc(daterep,'mon'), max(cases) from covid where geoid='US'\ngroup by trunc(daterep,'mon') order by 1\n\nPlan hash value: 4091160977\n\n-----------------------------------------------------------------------------------------------------\n| Id  | Operation            | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |\n-----------------------------------------------------------------------------------------------------\n|   0 | SELECT STATEMENT     |       |      1 |        |    55 (100)|      7 |00:00:00.01 |     180 |\n|   1 |  SORT ORDER BY       |       |      1 |     77 |    55   (4)|      7 |00:00:00.01 |     180 |\n|   2 |   PARTITION RANGE ALL|       |      1 |     77 |    55   (4)|      7 
|00:00:00.01 |     180 |\n|   3 |    HASH GROUP BY     |       |      2 |     77 |    55   (4)|      7 |00:00:00.01 |     180 |\n|*  4 |     TABLE ACCESS FULL| COVID |      2 |    105 |    53   (0)|    160 |00:00:00.01 |     180 |\n-----------------------------------------------------------------------------------------------------\n\nPredicate Information (identified by operation id):\n---------------------------------------------------\n\n   4 - filter(\"GEOID\"='US')\n<\/code><\/pre>\n<p>This has read 180 blocks, with multiblock reads.<\/p>\n<p>I force the access by index in order to compare the cost:<\/p>\n<pre><code>\nSQL&gt; select \/*+ index(covid) *\/ trunc(daterep,'mon'), max(cases) from covid where geoid='US' group by trunc(daterep,'mon') order by 1\n  2  \/\n\n   TRUNC(DATEREP,'MON')    MAX(CASES)\n_______________________ _____________\n01-DEC-19                           0\n01-JAN-20                           3\n01-FEB-20                          19\n01-MAR-20                       21595\n01-APR-20                       48529\n01-MAY-20                       33955\n01-JUN-20                       25178\n\nSQL&gt; select * from dbms_xplan.display_cursor(format=&gt;'+cost iostats last')\n  2  \/\n                                                                                                                     PLAN_TABLE_OUTPUT\n______________________________________________________________________________________________________________________________________\nSQL_ID  2whykac7cnjks, child number 0\n-------------------------------------\nselect \/*+ index(covid) *\/ trunc(daterep,'mon'), max(cases) from covid\nwhere geoid='US' group by trunc(daterep,'mon') order by 1\n\nPlan hash value: 2816502185\n\n-----------------------------------------------------------------------------------------------------------------------------------\n| Id  | Operation                                    | Name        | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | 
Buffers |\n-----------------------------------------------------------------------------------------------------------------------------------\n|   0 | SELECT STATEMENT                             |             |      1 |        |    95 (100)|      7 |00:00:00.01 |     125 |\n|   1 |  SORT ORDER BY                               |             |      1 |     77 |    95   (3)|      7 |00:00:00.01 |     125 |\n|   2 |   HASH GROUP BY                              |             |      1 |     77 |    95   (3)|      7 |00:00:00.01 |     125 |\n|   3 |    TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED| COVID       |      1 |    105 |    93   (0)|    160 |00:00:00.01 |     125 |\n|*  4 |     INDEX RANGE SCAN                         | COVID_GEOID |      1 |    105 |     1   (0)|    160 |00:00:00.01 |       2 |\n-----------------------------------------------------------------------------------------------------------------------------------\n\nPredicate Information (identified by operation id):\n---------------------------------------------------\n\n   4 - access(\"GEOID\"='US')\n<\/code><\/pre>\n<p>Even though the number of blocks is a bit smaller (125), they are single-block reads, so the cost is higher: 95 for the index access versus 55 for the full table scan. Using hints and comparing the cost is how I often try to understand the optimizer&#8217;s choice, and here the reason is clear: because the rows are scattered, the clustering factor of the index access is really bad.<\/p>\n<p>I said that I want to merge the partitions, and maybe reorganize with an online table move. 
But now, for this second phase of the lifecycle, I want to cluster rows on the country dimension rather than on arrival date.<\/p>\n<h3>Attribute clustering<\/h3>\n<p>This preference can be declared on the table with 12c Attribute Clustering:<\/p>\n<pre><code>\nSQL&gt; alter table covid add clustering by linear order (continentexp, countriesandterritories);\n\nTable altered.\n<\/code><\/pre>\n<p>You see that I can mention multiple columns, and I don&#8217;t need to use the GEOID column that I will query on. This is not an index. It is just a preference to cluster rows and, if they are clustered on the country name, they will also be clustered on continent, country code, geoid&#8230; I have chosen those columns for clarity when reading the DDL:<\/p>\n<pre><code>\nSQL&gt; exec dbms_metadata.set_transform_param(DBMS_METADATA.SESSION_TRANSFORM,'SEGMENT_ATTRIBUTES',false);\n\nPL\/SQL procedure successfully completed.\n\nSQL&gt; ddl covid\n\n  CREATE TABLE \"COVID\"\n   (    \"DATEREP\" DATE,\n        \"N_DAY\" NUMBER,\n        \"N_MONTH\" NUMBER,\n        \"N_YEAR\" NUMBER,\n        \"CASES\" NUMBER,\n        \"DEATHS\" NUMBER,\n        \"COUNTRIESANDTERRITORIES\" VARCHAR2(50),\n        \"GEOID\" VARCHAR2(10),\n        \"COUNTRYTERRITORYCODE\" VARCHAR2(3),\n        \"POPDATA2018\" NUMBER,\n        \"CONTINENTEXP\" VARCHAR2(10)\n   )\n CLUSTERING\n BY LINEAR ORDER (\"COVID\".\"CONTINENTEXP\",\n  \"COVID\".\"COUNTRIESANDTERRITORIES\")\n   YES ON LOAD  YES ON DATA MOVEMENT\n WITHOUT MATERIALIZED ZONEMAP\n  PARTITION BY RANGE (\"DATEREP\") INTERVAL (NUMTOYMINTERVAL(1,'YEAR'))\n (PARTITION \"OLD\"  VALUES LESS THAN (TO_DATE(' 2020-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) ,\n PARTITION \"NEW\"  VALUES LESS THAN (TO_DATE(' 2021-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) ) ;\n\n  CREATE INDEX \"COVID_GEOID\" ON \"COVID\" (\"GEOID\")\n  ;\n<\/code><\/pre>\n<p>As you can see, the default is YES for ON LOAD, which means that 
direct-path inserts will cluster rows, and ON DATA MOVEMENT is also YES, which is why merging partitions will also cluster rows.<\/p>\n<p>I&#8217;ve done that afterwards here, but this is something you can do at table creation. You mention on which attributes you want to cluster. You mention when: direct-path inserts (YES ON LOAD) and\/or table reorganization (YES ON DATA MOVEMENT). This is defined at table level. Beyond those defaults, the table reorganizations (ALTER TABLE &#8230; MOVE, ALTER TABLE &#8230; MERGE PARTITIONS) can explicitly DISALLOW CLUSTERING or ALLOW CLUSTERING.<\/p>\n<h3>Move Partition<\/h3>\n<p>When I have ingested some data and think that it would be better to cluster it, maybe once this partition is complete and new inserts go to a higher interval, I can reorganize it with a simple ALTER TABLE &#8230; MOVE:<\/p>\n<pre><code>\nSQL&gt; alter table covid move partition new online allow clustering;\n\nTable altered.\n<\/code><\/pre>\n<p>This will cluster rows together on the clustering attributes. I mentioned ALLOW CLUSTERING to show the syntax, but it is the default here anyway (YES ON DATA MOVEMENT).<\/p>\n<p>At that point, you may also want to compress the old partitions with basic compression (the compression that does not require an additional option but is possible only with bulk load or data movement). However, be careful: the combination of an online operation and basic compression requires the Advanced Compression Option. More info in a previous post on <a href=\"https:\/\/www.dbi-services.com\/blog\/segment-maintenance-online-compress\/\" rel=\"noopener noreferrer\" target=\"_blank\">\u201cSegment Maintenance Online Compress\u201d feature usage<\/a>.<\/p>\n<h3>Merge Partition<\/h3>\n<p>As my goal is to cluster data on a different dimension than the time one, I may want to have larger partitions for the past ones. 
Something like the current partition holding at most a week of data, while the past partitions cover quarterly or yearly ranges. That can be done with partition merging, which is an online operation in 18c (and note that I have a global index here, and an online operation does not invalidate indexes):<\/p>\n<pre><code>\nSQL&gt; alter table covid merge partitions old,new into partition oldmerged online allow clustering;\n\nTable altered.\n<\/code><\/pre>\n<p>This is a row movement, and clustering on data movement is enabled. Again, I mentioned ALLOW CLUSTERING just to show the syntax.<\/p>\n<p>Let&#8217;s see the number of buffers read now with index access. The index statistics (clustering factor) have not been updated, so the optimizer may not choose the index access yet (until dbms_stats runs on stale tables). I&#8217;m forcing it with a hint:<\/p>\n<pre><code>\nSQL&gt; select \/*+ index(covid) *\/ trunc(daterep,'mon'), max(cases) from covid where geoid='US' group by trunc(daterep,'mon') order by 1;\n\n   TRUNC(DATEREP,'MON')    MAX(CASES)\n_______________________ _____________\n01-DEC-19                           0\n01-JAN-20                           3\n01-FEB-20                          19\n01-MAR-20                       21595\n01-APR-20                       48529\n01-MAY-20                       33955\n01-JUN-20                       25178\n\nSQL&gt; select * from dbms_xplan.display_cursor(format=&gt;'+cost iostats last')\n  2  \/\n                                                                                                                     PLAN_TABLE_OUTPUT\n______________________________________________________________________________________________________________________________________\nSQL_ID  2whykac7cnjks, child number 0\n-------------------------------------\nselect \/*+ index(covid) *\/ trunc(daterep,'mon'), max(cases) from covid\nwhere geoid='US' group by trunc(daterep,'mon') order by 1\n\nPlan hash value: 
2816502185\n\n-----------------------------------------------------------------------------------------------------------------------------------\n| Id  | Operation                                    | Name        | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |\n-----------------------------------------------------------------------------------------------------------------------------------\n|   0 | SELECT STATEMENT                             |             |      1 |        |    95 (100)|      7 |00:00:00.01 |       8 |\n|   1 |  SORT ORDER BY                               |             |      1 |     77 |    95   (3)|      7 |00:00:00.01 |       8 |\n|   2 |   HASH GROUP BY                              |             |      1 |     77 |    95   (3)|      7 |00:00:00.01 |       8 |\n|   3 |    TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED| COVID       |      1 |    105 |    93   (0)|    160 |00:00:00.01 |       8 |\n|*  4 |     INDEX RANGE SCAN                         | COVID_GEOID |      1 |    105 |     1   (0)|    160 |00:00:00.01 |       5 |\n-----------------------------------------------------------------------------------------------------------------------------------\n\nPredicate Information (identified by operation id):\n---------------------------------------------------\n\n   4 - access(\"GEOID\"='US')\n       filter(TBL$OR$IDX$PART$NUM(,0,8,0,\"COVID\".ROWID)=1)\n<\/code><\/pre>\n<p>The cost has not changed (because of the statistics) but the number of buffers read is minimal: only the 8 buffers where all my rows for this country are clustered. Remember that I clustered on the country name but use the GEOID here in my predicate. 
That doesn&#8217;t matter as long as the rows are together.<\/p>\n<h3>Asynchronous global index maintenance<\/h3>\n<p>Note the strange predicate TBL$OR$IDX$PART$NUM(,0,8,0,&#8221;COVID&#8221;.ROWID)=1 that results from another 12c feature where global indexes are kept usable during the partition maintenance (which is required for an online operation) but optimized to be cleaned out asynchronously later. This is visible from USER_INDEXES:<\/p>\n<pre><code>\nSQL&gt; select index_name,to_char(last_analyzed,'hh24:mi:ss') last_analyzed,clustering_factor,orphaned_entries from user_indexes where table_name='COVID';\n\n    INDEX_NAME    LAST_ANALYZED    CLUSTERING_FACTOR    ORPHANED_ENTRIES\n______________ ________________ ____________________ ___________________\nCOVID_GEOID    08:33:34                        19206 YES\n\n<\/code><\/pre>\n<p>Orphaned entries mean that some entries in the global index may reference the dropped segment after my MOVE or MERGE, and the query has to ignore them.<\/p>\n<p>Those ROWID ranges are determined from the segments concerned, stored in the dictionary:<\/p>\n<pre><code>\nSQL&gt; select * from sys.index_orphaned_entry$;\n   INDEXOBJ#    TABPARTDOBJ#    HIDDEN\n____________ _______________ _________\n       79972           79970 O\n       79972           79971 O\n       79972           79980 O\n       79972           79973 O\n<\/code><\/pre>\n<p>HIDDEN=&#8217;O&#8217; means Orphaned, and the ROWIDs addressing these partitions are filtered out from the dirty index entries by the predicate filter(TBL$OR$IDX$PART$NUM(,0,8,0,&#8221;COVID&#8221;.ROWID)=1) above. 
<\/p>\n<p>This maintenance of the dirty index will be done during the maintenance window, but I can do it immediately to finish my reorganization properly:<\/p>\n<pre><code>\nSQL&gt; alter index COVID_GEOID coalesce cleanup;\n\nIndex altered.\n\nSQL&gt; select index_name,to_char(last_analyzed,'hh24:mi:ss') last_analyzed,clustering_factor,orphaned_entries from user_indexes where table_name='COVID';\n\n    INDEX_NAME    LAST_ANALYZED    CLUSTERING_FACTOR    ORPHANED_ENTRIES\n______________ ________________ ____________________ ___________________\nCOVID_GEOID    08:33:34                        19206 NO\n\n<\/code><\/pre>\n<p>No orphaned index entries anymore. Note that I could also have called the DBMS_PART.CLEANUP_GIDX procedure to do the same.<\/p>\n<p>This is fine for the query, but as the statistics were not updated, the optimizer doesn&#8217;t know yet how clustered my table is. In order to complete my reorganization and have queries benefit from this immediately, I gather the statistics:<\/p>\n<pre><code>\nSQL&gt; exec dbms_stats.gather_table_stats(user,'COVID',options=&gt;'gather auto');\n\nPL\/SQL procedure successfully completed.\n\nSQL&gt; select index_name,to_char(last_analyzed,'hh24:mi:ss') last_analyzed,clustering_factor,orphaned_entries from user_indexes where table_name='COVID';\n\n    INDEX_NAME    LAST_ANALYZED    CLUSTERING_FACTOR    ORPHANED_ENTRIES\n______________ ________________ ____________________ ___________________\nCOVID_GEOID    08:38:40                          369 NO\n\n<\/code><\/pre>\n<p>GATHER AUTO gathers only the stale ones and, as soon as I did my MOVE or MERGE, the index was marked as stale (note that the ALTER INDEX COALESCE does not mark it as stale by itself).<\/p>\n<p>And now my query will use this optimal index without the need for any hint:<\/p>\n<pre><code>\nSQL_ID  2nyu7m59d7spv, child number 0\n-------------------------------------\nselect trunc(daterep,'mon'), max(cases) from covid where geoid='US'\ngroup by 
trunc(daterep,'mon') order by 1\n\nPlan hash value: 2816502185\n\n-----------------------------------------------------------------------------------------------------------------------------------\n| Id  | Operation                                    | Name        | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |\n-----------------------------------------------------------------------------------------------------------------------------------\n|   0 | SELECT STATEMENT                             |             |      1 |        |     7 (100)|      7 |00:00:00.01 |       5 |\n|   1 |  SORT ORDER BY                               |             |      1 |    101 |     7  (29)|      7 |00:00:00.01 |       5 |\n|   2 |   HASH GROUP BY                              |             |      1 |    101 |     7  (29)|      7 |00:00:00.01 |       5 |\n|   3 |    TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED| COVID       |      1 |    160 |     5   (0)|    160 |00:00:00.01 |       5 |\n|*  4 |     INDEX RANGE SCAN                         | COVID_GEOID |      1 |    160 |     2   (0)|    160 |00:00:00.01 |       2 |\n-----------------------------------------------------------------------------------------------------------------------------------\n\nPredicate Information (identified by operation id):\n---------------------------------------------------\n\n   4 - access(\"GEOID\"='US')\n\n<\/code><\/pre>\n<p>and, thanks to the coalesce cleanup, there&#8217;s no predicate on orphan ROWIDs anymore.<\/p>\n<p>With this pattern, you may realize that my global index on countries is useful only for past data. Not for the recent one that has not been clustered yet. Then, we can even avoid maintaining the index for this partition. We will see that in the <a href=\"https:\/\/www.dbi-services.com\/blog\/oracle-12c-global-partial-index\" rel=\"noopener noreferrer\" target=\"_blank\">next post<\/a>. 
It is called partial indexing.<\/p>\n<p>With this pattern, we can even question the need to maintain an index for the old partitions. As all my rows for GEOID=&#8217;US&#8217; were packed in a few contiguous blocks, why not store just the range of ROWIDs rather than the full list? This is called Zone Maps. But this is only available on Exadata, and I like to think of Oracle as a multiplatform database.<\/p>\n<p>Many of those features came in the recent releases thanks to the development of the Autonomous Database. When the DBA is a cloud provider, whether it is automated or not, all maintenance must be done online without stopping the application. Those features are the bricks to build automatic lifecycle management and performance optimization.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>By Franck Pachot . In this series of small examples on recent features, I have imported in a previous post, the statistics of covid-19 per day and per countries. This is typical of data that comes as a time-series ordered by date, because this is how it is generated day after day, but where you [&hellip;]<\/p>\n","protected":false},"author":27,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[955,229,59],"tags":[958,496,38,1990,1489,1949,1231,654,96,209,1909,1419,753],"type_dbi":[],"class_list":["post-14280","post","type-post","status-publish","format-standard","hentry","category-cloud","category-database-administration-monitoring","category-oracle","tag-12cr2","tag-bi","tag-cluster","tag-clustering","tag-dwh","tag-features","tag-move","tag-online","tag-oracle","tag-oracle-12c","tag-partition","tag-reorg","tag-split"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.2) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Oracle 12c - reorg and split table with clustering - 
dbi Blog<\/title>"}