{"id":4995,"date":"2015-06-23T08:57:00","date_gmt":"2015-06-23T06:57:00","guid":{"rendered":"https:\/\/www.dbi-services.com\/blog\/do-the-block-size-matter\/"},"modified":"2015-06-23T08:57:00","modified_gmt":"2015-06-23T06:57:00","slug":"do-the-block-size-matter","status":"publish","type":"post","link":"https:\/\/www.dbi-services.com\/blog\/do-the-block-size-matter\/","title":{"rendered":"Does the block size matter?"},"content":{"rendered":"<h2>By Franck Pachot<\/h2>\n<p>The default block size in Oracle is 8k, and it is the most common. Sometimes, people create the database with a 16k block size for datawarehouses. You can also find some 32k block sizes, but they are less common, which means more bugs. In our <a href=\"index.php\/products\/dbi-insite-workshops\/oracle-performance-tuning-training\">tuning workshop<\/a> we recommend the default, 8k, in most cases. I&#8217;ll show here 3 tests that illustrate what different block sizes change for full table scans and index access.<\/p>\n<h3>Test case<\/h3>\n<p>I have defined a cache size for the non default block sizes I want to use:<\/p>\n<pre><code>SQL&gt; show parameter db%_cache_size\n&nbsp;\nNAME                                 TYPE        VALUE\n------------------------------------ ----------- -----\ndb_cache_size                        big integer 0\ndb_2k_cache_size                     big integer 0\ndb_4k_cache_size                     big integer 0\ndb_8k_cache_size                     big integer 0\ndb_16k_cache_size                    big integer 112M\ndb_32k_cache_size                    big integer 112M\ndb_flash_cache_size                  big integer 0\ndb_keep_cache_size                   big integer 0\ndb_recycle_cache_size                big integer 0\n<\/code><\/pre>\n<p>I&#8217;m creating 3 tablespaces with 8k, 16k and 32k block sizes. 
I create them as uniform with a 1M extent size because we are supposed to use a large block size for large tables and I don&#8217;t want the side effects of smaller first extents in auto extent size.<\/p>\n<pre><code>SQL&gt; create tablespace DEMO08K datafile '\/oracle\/u01\/oradata\/DEMO08K.dbf' size 1024M extent management local uniform size 1M blocksize 8k;\nTablespace created.\n&nbsp;\nSQL&gt; create tablespace DEMO16K datafile '\/oracle\/u01\/oradata\/DEMO16K.dbf' size 1024M extent management local uniform size 1M blocksize 16k;\nTablespace created.\n&nbsp;\nSQL&gt; create tablespace DEMO32K datafile '\/oracle\/u01\/oradata\/DEMO32K.dbf' size 1024M extent management local uniform size 1M blocksize 32k;\nTablespace created.\n<\/code><\/pre>\n<p>and then create 3 identical tables, one in each tablespace:<\/p>\n<pre><code>SQL&gt; create table TAB08K (id constraint PK_08K primary key,n,x) tablespace DEMO08K as\n     select rownum id , ora_hash(rownum,10) n , lpad(ora_hash(rownum,100),20,'0') x from xmltable('1 to 10000000');\nTable created.\n&nbsp;\nSQL&gt; create table TAB16K (id constraint PK_16K primary key,n,x) tablespace DEMO16K as\n     select rownum id , ora_hash(rownum,10) n , lpad(ora_hash(rownum,100),20,'0') x from xmltable('1 to 10000000');\nTable created.\n&nbsp;\nSQL&gt; create table TAB32K (id constraint PK_32K primary key,n,x) tablespace DEMO32K as\n     select rownum id , ora_hash(rownum,10) n , lpad(ora_hash(rownum,100),20,'0') x from xmltable('1 to 10000000');\nTable created.\n<\/code><\/pre>\n<p>My tables have 10 million rows, two number columns and one larger varchar2:<\/p>\n<pre><code>SQL&gt; select table_name,num_rows,avg_row_len,blocks from user_tables where table_name like 'TAB__K';\n&nbsp;\nTABLE_NAME   NUM_ROWS AVG_ROW_LEN     BLOCKS\n---------- ---------- ----------- ----------\nTAB08K       10000000          30      48459\nTAB16K       10000000          30      23997\nTAB32K       10000000          30      11933\n<\/code><\/pre>\n<p>Of course, 
a larger block size needs fewer blocks, but the total size is roughly the same. Here I have small rows, so this is where the fixed size of the block header can make the biggest difference.<\/p>\n<p>&nbsp;<\/p>\n<h3>Full Table Scan<\/h3>\n<p>So, the common idea is that a larger block size helps to do larger i\/o calls when doing full table scans&#8230;<\/p>\n<pre><code>SQL&gt; set timing on arraysize 5000 autotrace trace\n&nbsp;\nSQL&gt; select * from TAB08K;\n10000000 rows selected.\n&nbsp;\nElapsed: 00:00:32.53\n&nbsp;\nExecution Plan\n----------------------------------------------------------\nPlan hash value: 1209268626\n&nbsp;\n-------------------------------------------------------------------------\n| Id  | Operation                 | Name   | Rows  | Bytes | Cost (%CPU)|\n-------------------------------------------------------------------------\n|   0 | SELECT STATEMENT          |        |    10M|   286M|  8462   (1)|\n|   1 |  TABLE ACCESS STORAGE FULL| TAB08K |    10M|   286M|  8462   (1)|\n-------------------------------------------------------------------------\n&nbsp;\nStatistics\n----------------------------------------------------------\n          5  recursive calls\n          0  db block gets\n      50174  consistent gets\n      48175  physical reads\n          0  redo size\n  348174002  bytes sent via SQL*Net to client\n      22489  bytes received via SQL*Net from client\n       2001  SQL*Net roundtrips to\/from client\n          0  sorts (memory)\n          0  sorts (disk)\n   10000000  rows processed\n&nbsp;\nSQL&gt; set autotrace off\nSQL&gt; select name,value from v$mystat join v$statname using(statistic#) where name like 'physical read%' and value&gt;0;\n&nbsp;\nNAME                                                                  VALUE\n---------------------------------------------------------------- ----------\nphysical read total IO requests                                         403\nphysical read total multi block requests                   
             379\nphysical read total bytes                                         394821632\nphysical reads                                                        48196\nphysical reads cache                                                     23\nphysical reads direct                                                 48173\nphysical read IO requests                                               403\nphysical read bytes                                               394821632\nphysical reads direct temporary tablespace                                1\n<\/code><\/pre>\n<p>I&#8217;ve read 48175 8k blocks with 403 i\/o calls.<\/p>\n<p>&nbsp;<\/p>\n<p>Now doing the same from the table stored in the 16k blocksize tablespace:<\/p>\n<pre><code>SQL&gt; select * from TAB16K;\n&nbsp;\n10000000 rows selected.\n&nbsp;\nElapsed: 00:00:31.04\n&nbsp;\nExecution Plan\n----------------------------------------------------------\nPlan hash value: 2288178481\n&nbsp;\n-------------------------------------------------------------------------\n| Id  | Operation                 | Name   | Rows  | Bytes | Cost (%CPU)|\n-------------------------------------------------------------------------\n|   0 | SELECT STATEMENT          |        |    10M|   286M|  4378   (2)|\n|   1 |  TABLE ACCESS STORAGE FULL| TAB16K |    10M|   286M|  4378   (2)|\n-------------------------------------------------------------------------\n&nbsp;\nSQL&gt; select name,value from v$mystat join v$statname using(statistic#) where name like 'physical read%' and value&gt;0;\n&nbsp;\nNAME                                                                  VALUE\n---------------------------------------------------------------- ----------\nphysical read total IO requests                                         397\nphysical read total multi block requests                                375\nphysical read total bytes                                         391012352\nphysical reads                                                  
      23876\nphysical reads cache                                                     21\nphysical reads direct                                                 23855\nphysical read IO requests                                               397\nphysical read bytes                                               391012352\nphysical reads direct temporary tablespace                                1\n<\/code><\/pre>\n<p>I&#8217;ve read 23855 16k blocks with 397 i\/o calls. It&#8217;s not a lot better.<\/p>\n<pre><code>SQL&gt; select * from TAB32K;\n&nbsp;\n10000000 rows selected.\n&nbsp;\nElapsed: 00:00:29.61\n&nbsp;\nExecution Plan\n----------------------------------------------------------\nPlan hash value: 1240330363\n&nbsp;\n-------------------------------------------------------------------------\n| Id  | Operation                 | Name   | Rows  | Bytes | Cost (%CPU)|\n-------------------------------------------------------------------------\n|   0 | SELECT STATEMENT          |        |    10M|   286M|  2364   (3)|\n|   1 |  TABLE ACCESS STORAGE FULL| TAB32K |    10M|   286M|  2364   (3)|\n-------------------------------------------------------------------------\n&nbsp;\nSQL&gt; select name,value from v$mystat join v$statname using(statistic#) where name like 'physical read%' and value&gt;0;\n&nbsp;\nNAME                                                                  VALUE\n---------------------------------------------------------------- ----------\nphysical read total IO requests                                         398\nphysical read total multi block requests                                373\nphysical read total bytes                                         388890624\nphysical reads                                                        11886\nphysical reads cache                                                     24\nphysical reads direct                                                 11862\nphysical read IO requests                                    
            398\nphysical read bytes                                               388890624\nphysical reads direct temporary tablespace                                1\n<\/code><\/pre>\n<p>I&#8217;ve read 11862 32k blocks with 398 i\/o calls.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Conclusion<\/strong>: we do roughly the same amount of i\/o when doing a full table scan. This is because Oracle reads in multiblock i\/o. Note that db_file_multiblock_read_count is defined as a number of blocks of the default block size, but the i\/o size is adapted for tablespaces with a non default block size. For example, db_file_multiblock_read_count=128 when the default block size is 8k will always read 1MB: 128 blocks of 8k, 64 blocks of 16k, or 32 blocks of 32k.<\/p>\n<h3>Indexes<\/h3>\n<p>I already have an index on the primary key. Let&#8217;s add some more indexes:<\/p>\n<pre><code>SQL&gt; create index ID_08K on TAB08K(x) tablespace DEMO08K ;\nIndex created.\nSQL&gt; create index ID_16K on TAB16K(x) tablespace DEMO16K ;\nIndex created.\nSQL&gt; create index ID_32K on TAB32K(x) tablespace DEMO32K ;\nIndex created.\nSQL&gt; create bitmap index BI_08K on TAB08K(n) tablespace DEMO08K ;\nIndex created.\nSQL&gt; create bitmap index BI_16K on TAB16K(n) tablespace DEMO16K ;\nIndex created.\nSQL&gt; create bitmap index BI_32K on TAB32K(n) tablespace DEMO32K ;\nIndex created.\n<\/code><\/pre>\n<p>and check their size:<\/p>\n<pre><code>SQL&gt; select index_name,num_rows,blevel,leaf_blocks from user_indexes where table_name like 'TAB__K' order by 1;\n&nbsp;\nINDEX_NAME   NUM_ROWS     BLEVEL LEAF_BLOCKS\n---------- ---------- ---------- -----------\nBI_08K           3211          2        1606\nBI_16K           1562          1         781\nBI_32K            759          1         380\nID_08K       10000000          2       44643\nID_16K       10000000          2       22027\nID_32K       10000000          2       10929\nPK_08K       10000000          2       22132\nPK_16K       10000000          2       10921\nPK_32K       10000000          2        
5425\n<\/code><\/pre>\n<p>Of course, the number of blocks is lower when the block size is bigger. And because branches are larger, you may have a smaller depth. But look: on my 10000000 rows table, the depth is the same for the regular indexes: 2 branch levels. Only for the bitmap indexes, because they are very small, do we need one less branch level here.<\/p>\n<p>&nbsp;<\/p>\n<p>But think about it. Index depth mostly matters for OLTP, where you get rows by their primary key. But people say that smaller blocks are better for OLTP&#8230; Datawarehouses often have bitmap indexes, but do you care to have smaller bitmap indexes?<\/p>\n<h3>Index access (lots of rows, good clustering factor)<\/h3>\n<p>Anyway, let&#8217;s test a large range scan:<\/p>\n<pre><code>SQL&gt; select * from TAB08K where id between 1 and 100000;\n&nbsp;\n100000 rows selected.\n&nbsp;\nElapsed: 00:00:00.44\n&nbsp;\nExecution Plan\n----------------------------------------------------------\nPlan hash value: 2790916815\n&nbsp;\n-----------------------------------------------------------------------------------\n| Id  | Operation                           | Name   | Rows  | Bytes | Cost (%CPU)|\n-----------------------------------------------------------------------------------\n|   0 | SELECT STATEMENT                    |        |   100K|  2929K|   707   (1)|\n|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| TAB08K |   100K|  2929K|   707   (1)|\n|*  2 |   INDEX RANGE SCAN                  | PK_08K |   100K|       |   225   (1)|\n-----------------------------------------------------------------------------------\n&nbsp;\nPredicate Information (identified by operation id):\n---------------------------------------------------\n&nbsp;\n   2 - access(\"ID\"&gt;=1 AND \"ID\"&lt;=100000)\n&nbsp;\nStatistics\n----------------------------------------------------------\n          1  recursive calls\n          0  db block gets\n        878  consistent gets\n        679  physical reads\n          0  redo size\n    
3389860  bytes sent via SQL*Net to client\n       1589  bytes received via SQL*Net from client\n        101  SQL*Net roundtrips to\/from client\n          0  sorts (memory)\n          0  sorts (disk)\n     100000  rows processed\n&nbsp;\nSQL&gt; set autotrace off\nSQL&gt; select name,value from v$mystat join v$statname using(statistic#) where name like 'physical read%' and value&gt;0;\n&nbsp;\nNAME                                                                  VALUE\n---------------------------------------------------------------- ----------\nphysical read total IO requests                                         705\nphysical read total bytes                                           5775360\nphysical reads                                                          705\nphysical reads cache                                                    705\nphysical read IO requests                                               705\nphysical read bytes                                                 5775360\n<\/code><\/pre>\n<p>We have read 100000 rows through the index. The index is very well clustered. 
I&#8217;ve done 705 i\/o calls to get those rows from 8k blocks.<\/p>\n<p>&nbsp;<\/p>\n<p>Now with 16k blocks:<\/p>\n<pre><code>SQL&gt; select * from TAB16K where id between 1 and 100000;\n&nbsp;\n100000 rows selected.\n&nbsp;\nElapsed: 00:00:00.37\n&nbsp;\nExecution Plan\n----------------------------------------------------------\nPlan hash value: 1432239150\n&nbsp;\n-----------------------------------------------------------------------------------\n| Id  | Operation                           | Name   | Rows  | Bytes | Cost (%CPU)|\n-----------------------------------------------------------------------------------\n|   0 | SELECT STATEMENT                    |        |   100K|  2929K|   352   (1)|\n|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| TAB16K |   100K|  2929K|   352   (1)|\n|*  2 |   INDEX RANGE SCAN                  | PK_16K |   100K|       |   113   (1)|\n-----------------------------------------------------------------------------------\n&nbsp;\nPredicate Information (identified by operation id):\n---------------------------------------------------\n&nbsp;\n   2 - access(\"ID\"&gt;=1 AND \"ID\"&lt;=100000)\n&nbsp;\nStatistics\n----------------------------------------------------------\n          1  recursive calls\n          0  db block gets\n        537  consistent gets\n        337  physical reads\n          0  redo size\n    3389860  bytes sent via SQL*Net to client\n       1589  bytes received via SQL*Net from client\n        101  SQL*Net roundtrips to\/from client\n          0  sorts (memory)\n          0  sorts (disk)\n     100000  rows processed\n&nbsp;\nSQL&gt; set autotrace off\nSQL&gt; select name,value from v$mystat join v$statname using(statistic#) where name like 'physical read%' and value&gt;0;\n&nbsp;\nNAME                                                                  VALUE\n---------------------------------------------------------------- ----------\nphysical read total IO requests                                         363\nphysical 
read total bytes                                           5734400\nphysical reads                                                          363\nphysical reads cache                                                    363\nphysical read IO requests                                               363\nphysical read bytes                                                 5734400\n<\/code><\/pre>\n<p>The number of i\/o calls has been divided by two.<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<pre><code>SQL&gt; select * from TAB32K where id between 1 and 100000;\n&nbsp;\n100000 rows selected.\n&nbsp;\nElapsed: 00:00:00.35\n&nbsp;\nExecution Plan\n----------------------------------------------------------\nPlan hash value: 3074346038\n&nbsp;\n-----------------------------------------------------------------------------------\n| Id  | Operation                           | Name   | Rows  | Bytes | Cost (%CPU)|\n-----------------------------------------------------------------------------------\n|   0 | SELECT STATEMENT                    |        |   100K|  2929K|   177   (1)|\n|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| TAB32K |   100K|  2929K|   177   (1)|\n|*  2 |   INDEX RANGE SCAN                  | PK_32K |   100K|       |    58   (2)|\n-----------------------------------------------------------------------------------\n&nbsp;\nPredicate Information (identified by operation id):\n---------------------------------------------------\n&nbsp;\n   2 - access(\"ID\"&gt;=1 AND \"ID\"&lt;=100000)\n&nbsp;\nStatistics\n----------------------------------------------------------\n          1  recursive calls\n          0  db block gets\n        369  consistent gets\n        169  physical reads\n          0  redo size\n    3389860  bytes sent via SQL*Net to client\n       1589  bytes received via SQL*Net from client\n        101  SQL*Net roundtrips to\/from client\n          0  sorts (memory)\n          0  sorts (disk)\n     100000  rows processed\n&nbsp;\nSQL&gt; set autotrace 
off\nSQL&gt; select name,value from v$mystat join v$statname using(statistic#) where name like 'physical read%' and value&gt;0;\n&nbsp;\nNAME                                                                  VALUE\n---------------------------------------------------------------- ----------\nphysical read total IO requests                                         195\nphysical read total bytes                                           5750784\nphysical reads                                                          195\nphysical reads cache                                                    195\nphysical read IO requests                                               195\nphysical read bytes                                                 5750784\n<\/code><\/pre>\n<p>With 32k blocks, it&#8217;s once again divided by two.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Conclusion<\/strong>: when doing single block reads coming from a well clustered index, we do fewer i\/o calls with larger blocks. Because a well clustered index returns rows that are stored next to each other, larger blocks make more of the rows we need physically contiguous.<\/p>\n<h3>Index access (few rows, bad clustering factor)<\/h3>\n<p>Here is a query WHERE X=&#8217;00000000000000000000&#8217;. The index on X &#8211; which I&#8217;ve populated with a hash value on rownum &#8211; has a bad clustering factor. 
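<\/p>\n<p>The statement itself is not shown in the plan output below; here is a sketch of what I run (the rownum predicate is an assumption, suggested by the COUNT STOPKEY operation in the plan):<\/p>\n<pre><code>select * from TAB08K where x='00000000000000000000' and rownum &lt;= 30;\n<\/code><\/pre>\n<p>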
I fetch only 30 rows.<\/p>\n<pre><code>----------------------------------------------------------------------------\n| Id  | Operation                    | Name   | Rows  | Bytes | Cost (%CPU)|\n----------------------------------------------------------------------------\n|   0 | SELECT STATEMENT             |        |    30 |   900 |    16   (0)|\n|   1 |  COUNT STOPKEY               |        |       |       |            |\n|   2 |   TABLE ACCESS BY INDEX ROWID| TAB08K |    30 |   900 |    16   (0)|\n|*  3 |    INDEX RANGE SCAN          | ID_08K | 99010 |       |     3   (0)|\n----------------------------------------------------------------------------\n&nbsp;\nPredicate Information (identified by operation id):\n---------------------------------------------------\n&nbsp;\n   3 - access(\"X\"='00000000000000000000')\n&nbsp;\nNAME                                                                  VALUE\n---------------------------------------------------------------- ----------\nphysical read total IO requests                                          16\nphysical read total bytes                                            131072\nphysical reads                                                           16\nphysical reads cache                                                     16\nphysical read IO requests                                                16\nphysical read bytes                                                  131072\n&nbsp;\n<\/code><\/pre>\n<p>The query returned about 30 rows. Because of the bad clustering factor we had to read a block every two rows on average. 
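<\/p>\n<p>The clustering factor itself can be checked from the dictionary &#8211; a query that was not part of the run above:<\/p>\n<pre><code>select index_name,clustering_factor,num_rows from user_indexes where table_name like 'TAB__K' order by 1;\n<\/code><\/pre>\n<p>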
Let&#8217;s see the same with 32k blocks.<\/p>\n<pre><code>---------------------------------------------------------------------------------------\n| Id  | Operation                    | Name   | Rows  | Bytes | Cost (%CPU)| Time     |\n---------------------------------------------------------------------------------------\n|   0 | SELECT STATEMENT             |        |    30 |   900 |     7   (0)| 00:00:01 |\n|   1 |  COUNT STOPKEY               |        |       |       |            |          |\n|   2 |   TABLE ACCESS BY INDEX ROWID| TAB32K |    30 |   900 |     7   (0)| 00:00:01 |\n|*  3 |    INDEX RANGE SCAN          | ID_32K | 99010 |       |     3   (0)| 00:00:01 |\n---------------------------------------------------------------------------------------\n&nbsp;\nPredicate Information (identified by operation id):\n---------------------------------------------------\n   3 - access(\"X\"='00000000000000000000')\n&nbsp;\nNAME                                                                  VALUE\n---------------------------------------------------------------- ----------\nphysical read total IO requests                                          33\nphysical read total bytes                                            442368\nphysical reads                                                           33\nphysical reads cache                                                     33\nphysical read IO requests                                                33\nphysical read bytes                                                  442368\n<\/code><\/pre>\n<p>More i\/o calls here, and a larger i\/o size.<br \/>\n<strong>Conclusion<\/strong>: a larger block size is bad when we need only a few rows, especially from a badly clustered index. 
More i\/o calls, a larger i\/o size, and the large blocks take more space in the buffer cache.<\/p>\n<p>&nbsp;<\/p>\n<h3>So, which block size suits your workload?<\/h3>\n<p>The question is: do you get better performance in a datawarehouse with a larger block size? For full table scans, no, it&#8217;s the same and has always been the same. For index range scans retrieving a large number of rows, then yes, the access by index may be faster. But think about it. Retrieving lots of rows by index is always bad, whatever the block size is. If you want to do something about it, look at the design: define better indexes (covering all selected columns), partition the table, compress the table, use Parallel Query&#8230; A lot of tuning to do before thinking about block size.<\/p>\n<p>With a smaller block size the optimizer will favor a full table scan for those cases, and today the gap between full table scan and index access is not so large. There is Exadata SmartScan, In-Memory. Even without options, multiblock read is faster since 11g when doing serial direct read. All that makes full table scans faster. And index access has also been improved: you have the batched access by rowid to compensate for a bad clustering factor.<\/p>\n<p>And if you think about having a non default block size for some tablespaces, are you sure that the gain you expect will not be ruined by a bad sizing of the buffer cache? When you have non default blocksize tablespaces, you have to manage their buffer cache manually.<\/p>\n<h3>Other considerations<\/h3>\n<p>A larger block size may have been recommended for very large tablespaces in order to avoid having too many datafiles (their maximum size is a number of blocks), but today you can create bigfile tablespaces for them, so it is not a reason anymore.<\/p>\n<p>There is one reason to have a larger block size. When you have very large rows, you can avoid row chaining. But once again, are you sure it is a problem (i.e. do you often select the columns at the end)? 
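<\/p>\n<p>A quick way to check whether chaining actually occurs is to look at the &#8216;table fetch continued row&#8217; statistic, which counts the additional block visits caused by chained or migrated rows:<\/p>\n<pre><code>select name,value from v$sysstat where name='table fetch continued row';\n<\/code><\/pre>\n<p>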
And maybe you should review the design first.<\/p>\n<p>There was another reason to have a larger block size for tablespaces containing large LOBs (I know that the &#8216;L&#8217; is already for &#8216;Large&#8217;, but I mean LOBs larger than the default block size). Today you should use SecureFiles, and we get better performance with them. But that&#8217;s for another blog post.<\/p>\n<h3>Conclusion<\/h3>\n<p>When people come with those kinds of rules of thumb, I usually try to see if it is something they thought about, or just a blind idea. For example, when they want to rebuild indexes, I ask them which PCTFREE they set for that. Because rebuilding without knowing the PCTFREE we want to achieve is pointless.<br \/>\nAnd when they want to create a tablespace with a non default block size, I ask them how they have calculated the buffer cache to allocate for that blocksize. The whole size of the table? Then why do you want to do fewer i\/o calls if everything is in cache? A small size? Then are you sure that the lower number of i\/o calls will compensate for the cache you could have in the default buffer pool?<\/p>\n<p>In most cases, just keep the default block size; you probably have a lot of other things to tune. I&#8217;ll now tell you a short story. I was a junior not-yet-DBA in 1998, implementing banking software (COBOL, tuxedo, HP-UX, Oracle 7). The application design had a lot to review. Things like comments stored in CHAR(2000), for example. Then I saw a senior consultant recommending &#8216;The&#8217; solution: increase the block size. The customer accepted it, we did it, and everything was immediately better. Of course, in order to do that we had to export\/import the database, so everything was reorganized and a lot of design problems were hidden for a few days. After a while, the performance issues came back, and we had to continue the optimization tasks. Being on a larger block size did not change anything about that. 
This is where I learned exactly which kind of consultant I don&#8217;t want to be.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>By Franck Pachot . The default block size is 8k in Oracle. This is the most common. Sometimes, people create the database with 16k block size for datawarehouses. You can also find some 32k block size, but less common which means more bugs. In our tuning workshop we recommend the default, 8k, for most of [&hellip;]<\/p>\n","protected":false},"author":27,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[198,59],"tags":[96,67],"type_dbi":[],"class_list":["post-4995","post","type-post","status-publish","format-standard","hentry","category-database-management","category-oracle","tag-oracle","tag-performance"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.2) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Does the block size matter? - dbi Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.dbi-services.com\/blog\/do-the-block-size-matter\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Does the block size matter?\" \/>\n<meta property=\"og:description\" content=\"By Franck Pachot . The default block size is 8k in Oracle. This is the most common. Sometimes, people create the database with 16k block size for datawarehouses. You can also find some 32k block size, but less common which means more bugs. 