Oracle Blockchain Tables are available as of Oracle 19.10 (see Connor’s blog on them). They are part of all editions and do not require any specific license. I.e. whenever we need to store data in a table that should never be updated again, and we have to ensure the data cannot be tampered with, blockchain tables should be considered as an option. As Oracle mentions in the documentation that blockchain tables could e.g. be used for “audit trails”, I thought I’d test them by archiving unified audit trail data. Let me share my experience:

First of all I set up a 19c database so that it supports blockchain tables:

– Installed 19.10.0.0.210119 (patch 32218454)
– Set COMPATIBLE=19.10.0 and restarted the DB
– Installed patch 32431413

REMARK: All tests done with 19c have also been done with Oracle 21c on the Oracle Cloud, to verify that the results are not caused by the backport of blockchain tables to 19c.

Creating the BLOCKCHAIN TABLE:

Blockchain tables do not support the CREATE TABLE … AS SELECT syntax:


create blockchain table uat_copy_blockchain2
no drop until 0 days idle
no delete until 31 days after insert
hashing using "sha2_512" version v1
tablespace audit_data
as select * from unified_audit_trail;

ERROR at line 6:
ORA-05715: operation not allowed on the blockchain table

I.e. I have to pre-create the blockchain table and load it with “insert … select”:


CREATE blockchain TABLE uat_copy_blockchain 
   ("AUDIT_TYPE" VARCHAR2(64),
	"SESSIONID" NUMBER,
	"PROXY_SESSIONID" NUMBER,
	"OS_USERNAME" VARCHAR2(128),
...
	"DIRECT_PATH_NUM_COLUMNS_LOADED" NUMBER,
	"RLS_INFO" CLOB,
	"KSACL_USER_NAME" VARCHAR2(128),
	"KSACL_SERVICE_NAME" VARCHAR2(512),
	"KSACL_SOURCE_LOCATION" VARCHAR2(48),
	"PROTOCOL_SESSION_ID" NUMBER,
	"PROTOCOL_RETURN_CODE" NUMBER,
	"PROTOCOL_ACTION_NAME" VARCHAR2(32),
	"PROTOCOL_USERHOST" VARCHAR2(128),
	"PROTOCOL_MESSAGE" VARCHAR2(4000)
   )
no drop until 0 days idle
no delete until 31 days after insert
hashing using "sha2_512" version v1
tablespace audit_data;

Table created.

Now load the data into the blockchain table:


SQL> insert into uat_copy_blockchain
  2  select * from unified_audit_trail;

26526 rows created.

Elapsed: 00:00:07.24
SQL> commit;

Commit complete.

Elapsed: 00:00:43.26

Over 43 seconds for the COMMIT!

The reason for the long COMMIT time is that the blockchain (or rather the chain of row hashes for the 26526 rows) is actually built when committing. I.e. all blockchain-related columns in the table are still empty after the insert, before the commit:


SQL> insert into uat_copy_blockchain
  2  select * from unified_audit_trail;

26526 rows created.

SQL> select count(*) from uat_copy_blockchain
  2  where ORABCTAB_INST_ID$ is NULL
  3  and ORABCTAB_CHAIN_ID$ is NULL
  4  and ORABCTAB_SEQ_NUM$ is NULL
  5  and ORABCTAB_CREATION_TIME$ is NULL
  6  and ORABCTAB_USER_NUMBER$ is NULL
  7  and ORABCTAB_HASH$ is NULL
  8  ;

  COUNT(*)
----------
     26526

During the commit those hidden columns are updated:


SQL> commit;

Commit complete.

SQL> select count(*) from uat_copy_blockchain
  2  where ORABCTAB_INST_ID$ is NULL
  3  or ORABCTAB_CHAIN_ID$ is NULL
  4  or ORABCTAB_SEQ_NUM$ is NULL
  5  or ORABCTAB_CREATION_TIME$ is NULL
  6  or ORABCTAB_USER_NUMBER$ is NULL
  7  or ORABCTAB_HASH$ is NULL
  8  ;

  COUNT(*)
----------
         0

With SQL trace enabled I can see the following recursive statements during the COMMIT:


SQL ID: 6r4qu6xnvb3nt Plan Hash: 960301545

update "CBLEILE"."UAT_COPY_BLOCKCHAIN" set orabctab_inst_id$ = :1,
  orabctab_chain_id$ = :2, orabctab_seq_num$ = :3, orabctab_user_number$ = :4,
   ORABCTAB_CREATION_TIME$ = :5
where
 rowid = :lrid


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse    26526      0.56       0.55          0          0          0           0
Execute  26526     10.81      12.21       3824       3395      49546       26526
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    53052     11.38      12.76       3824       3395      49546       26526

********************************************************************************

SQL ID: 4hc26wpgb5tqr Plan Hash: 2019081831

update sys.blockchain_table_chain$ set                    hashval_position =
  :1, max_seq_number =:2
where
 obj#=:3 and inst_id = :4 and chain_id = :5


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  26526      9.29      10.12        512      26533      27822       26526
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    26527      9.29      10.12        512      26533      27822       26526

********************************************************************************

SQL ID: 2t5ypzqub0g35 Plan Hash: 960301545

update "CBLEILE"."UAT_COPY_BLOCKCHAIN" set orabctab_hash$ = :1
where
 rowid = :lrid


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse    26526      0.58       0.57          0          0          0           0
Execute  26526      6.79       7.27       1832       2896      46857       26526
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    53052      7.37       7.85       1832       2896      46857       26526

********************************************************************************

SQL ID: bvggpqdp5u4uf Plan Hash: 1612174689

select max_seq_number, hashval_position
from
 sys.blockchain_table_chain$ where obj#=:1 and                     inst_id =
  :2 and chain_id = :3


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  26527      5.34       5.51          0          0          0           0
Fetch    26527      0.75       0.72          0      53053          0       26526
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    53055      6.10       6.24          0      53053          0       26526

********************************************************************************

SQL ID: dktp4suj3mn0t Plan Hash: 4188997816

SELECT  "AUDIT_TYPE",  "SESSIONID",  "PROXY_SESSIONID",  "OS_USERNAME",
  "USERHOST",  "TERMINAL",  "INSTANCE_ID",  "DBID",  "AUTHENTICATION_TYPE",
  "DBUSERNAME",  "DBPROXY_USERNAME",  "EXTERNAL_USERID",  "GLOBAL_USERID",
  "CLIENT_PROGRAM_NAME",  "DBLINK_INFO",  "XS_USER_NAME",  "XS_SESSIONID",
  "ENTRY_ID",  "STATEMENT_ID",  "EVENT_TIMESTAMP",  "EVENT_TIMESTAMP_UTC",
  "ACTION_NAME",  "RETURN_CODE",  "OS_PROCESS",  "TRANSACTION_ID",  "SCN",
  "EXECUTION_ID",  "OBJECT_SCHEMA",  "OBJECT_NAME",  "SQL_TEXT",  "SQL_BINDS",
    "APPLICATION_CONTEXTS",  "CLIENT_IDENTIFIER",  "NEW_SCHEMA",  "NEW_NAME",
   "OBJECT_EDITION",  "SYSTEM_PRIVILEGE_USED",  "SYSTEM_PRIVILEGE",
  "AUDIT_OPTION",  "OBJECT_PRIVILEGES",  "ROLE",  "TARGET_USER",
  "EXCLUDED_USER",  "EXCLUDED_SCHEMA",  "EXCLUDED_OBJECT",  "CURRENT_USER",
  "ADDITIONAL_INFO",  "UNIFIED_AUDIT_POLICIES",  "FGA_POLICY_NAME",
  "XS_INACTIVITY_TIMEOUT",  "XS_ENTITY_TYPE",  "XS_TARGET_PRINCIPAL_NAME",
  "XS_PROXY_USER_NAME",  "XS_DATASEC_POLICY_NAME",  "XS_SCHEMA_NAME",
  "XS_CALLBACK_EVENT_TYPE",  "XS_PACKAGE_NAME",  "XS_PROCEDURE_NAME",
  "XS_ENABLED_ROLE",  "XS_COOKIE",  "XS_NS_NAME",  "XS_NS_ATTRIBUTE",
  "XS_NS_ATTRIBUTE_OLD_VAL",  "XS_NS_ATTRIBUTE_NEW_VAL",  "DV_ACTION_CODE",
  "DV_ACTION_NAME",  "DV_EXTENDED_ACTION_CODE",  "DV_GRANTEE",
  "DV_RETURN_CODE",  "DV_ACTION_OBJECT_NAME",  "DV_RULE_SET_NAME",
  "DV_COMMENT",  "DV_FACTOR_CONTEXT",  "DV_OBJECT_STATUS",  "OLS_POLICY_NAME",
    "OLS_GRANTEE",  "OLS_MAX_READ_LABEL",  "OLS_MAX_WRITE_LABEL",
  "OLS_MIN_WRITE_LABEL",  "OLS_PRIVILEGES_GRANTED",  "OLS_PROGRAM_UNIT_NAME",
   "OLS_PRIVILEGES_USED",  "OLS_STRING_LABEL",  "OLS_LABEL_COMPONENT_TYPE",
  "OLS_LABEL_COMPONENT_NAME",  "OLS_PARENT_GROUP_NAME",  "OLS_OLD_VALUE",
  "OLS_NEW_VALUE",  "RMAN_SESSION_RECID",  "RMAN_SESSION_STAMP",
  "RMAN_OPERATION",  "RMAN_OBJECT_TYPE",  "RMAN_DEVICE_TYPE",
  "DP_TEXT_PARAMETERS1",  "DP_BOOLEAN_PARAMETERS1",
  "DIRECT_PATH_NUM_COLUMNS_LOADED",  "RLS_INFO",  "KSACL_USER_NAME",
  "KSACL_SERVICE_NAME",  "KSACL_SOURCE_LOCATION",  "PROTOCOL_SESSION_ID",
  "PROTOCOL_RETURN_CODE",  "PROTOCOL_ACTION_NAME",  "PROTOCOL_USERHOST",
  "PROTOCOL_MESSAGE",  "ORABCTAB_INST_ID$",  "ORABCTAB_CHAIN_ID$",
  "ORABCTAB_SEQ_NUM$",  "ORABCTAB_CREATION_TIME$",  "ORABCTAB_USER_NUMBER$",
  "ORABCTAB_HASH$",  "ORABCTAB_SIGNATURE$",  "ORABCTAB_SIGNATURE_ALG$",
  "ORABCTAB_SIGNATURE_CERT$"
from
 "CBLEILE"."UAT_COPY_BLOCKCHAIN" where rowid = :lrid


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  26526      3.85       3.84          0          0          0           0
Fetch    26526      1.31       1.31          0      28120          0       26526
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    53053      5.17       5.15          0      28120          0       26526

********************************************************************************

SQL ID: fcq6kngm4b3m5 Plan Hash: 4188997816

SELECT  "ORABCTAB_INST_ID$",  "ORABCTAB_CHAIN_ID$",  "ORABCTAB_SEQ_NUM$",
  "ORABCTAB_CREATION_TIME$",  "ORABCTAB_USER_NUMBER$",  "ORABCTAB_HASH$",
  "ORABCTAB_SIGNATURE$",  "ORABCTAB_SIGNATURE_ALG$",
  "ORABCTAB_SIGNATURE_CERT$",  "ORABCTAB_SPARE$"
from
 "CBLEILE"."UAT_COPY_BLOCKCHAIN" where rowid = :lrid


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  26526      3.04       3.05          0          0          0           0
Fetch    26526      0.41       0.39          0      26526          0       26526
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    53053      3.45       3.45          0      26526          0       26526

********************************************************************************

I.e. for every row inserted in the transaction, several recursive statements have to be executed during the COMMIT to compute the hashes and update the inserted rows, linking them together through the hash chain.
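The principle behind those recursive statements can be modeled in a few lines. Below is a minimal sketch in Python with a simplified, hypothetical row format; Oracle’s actual hash input covers the column data, metadata and the previous hash in its own documented format, so this only illustrates the idea, not Oracle’s exact algorithm:

```python
import hashlib

def chain_rows(rows):
    """Simplified model of how ORABCTAB_HASH$ links rows together:
    each row's hash covers the row data plus the previous row's hash."""
    prev_hash = b""  # the first row of a chain has no predecessor
    chained = []
    for seq, row in enumerate(rows, start=1):
        row_hash = hashlib.sha512(prev_hash + repr(row).encode()).hexdigest()
        chained.append({"seq": seq, "row": row, "hash": row_hash})
        prev_hash = bytes.fromhex(row_hash)
    return chained

chained = chain_rows([("LOGON", 1), ("SELECT", 2), ("LOGOFF", 3)])
```

Because every hash depends on the previous one, the computation is inherently sequential per chain: changing a committed row would invalidate all hashes after it, which is exactly what makes tampering detectable.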

That raises the question of whether I should take care of PCTFREE when creating the blockchain table, to avoid row migration (often wrongly called row chaining) caused by the post-insert update of the hidden columns.

As with normal tables, blockchain tables have a default of 10% for PCTFREE:


SQL> select pct_free from tabs where table_name='UAT_COPY_BLOCKCHAIN';

  PCT_FREE
----------
        10

Do we actually have migrated rows after the commit?


SQL> @?/rdbms/admin/utlchain

Table created.

SQL> analyze table uat_copy_blockchain list chained rows;

Table analyzed.

SQL> select count(*) from chained_rows;

  COUNT(*)
----------
      7298

SQL> select count(distinct dbms_rowid.rowid_relative_fno(rowid)||'_'||dbms_rowid.rowid_block_number(rowid)) blocks_with_rows
  2  from uat_copy_blockchain;

BLOCKS_WITH_ROWS
----------------
	    1084

So it makes sense to adjust the PCTFREE. In my case something like 25-30% would be best, because the blockchain data makes up around 23% of the average row length:


SQL> select sum(avg_col_len) from user_tab_cols where table_name='UAT_COPY_BLOCKCHAIN';

SUM(AVG_COL_LEN)
----------------
	     401

SQL> select sum(avg_col_len) from user_tab_cols where table_name='UAT_COPY_BLOCKCHAIN'
  2  and column_name like 'ORABCTAB%';

SUM(AVG_COL_LEN)
----------------
	      92

SQL> select (92/401)*100 from dual;

(92/401)*100
------------
  22.9426434

I could reduce the commit time by 5 seconds by setting PCTFREE to 30.
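As a rule of thumb, PCTFREE should at least cover the share of the row that is only written at COMMIT time. A small sketch of that rule (my own helper, not an Oracle formula; the 401 and 92 bytes are the avg_col_len sums measured above, and the round-up-to-5%-plus-headroom convention is an assumption):

```python
def suggested_pctfree(avg_row_len, hidden_cols_len, headroom=5):
    """Estimate PCTFREE so that the post-COMMIT update of the hidden
    ORABCTAB_* columns fits into the free space left in each block."""
    pct_hidden = hidden_cols_len / avg_row_len * 100
    # round up to the next multiple of 5 and add some headroom
    return int(-(-pct_hidden // 5) * 5) + headroom

print(suggested_pctfree(401, 92))  # → 30
```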

But coming back to the commit time issue:

This can easily be tested by checking how much the commit time increases when more data is loaded per transaction. Here is the test done on 21c on the Oracle Cloud:


SQL> create blockchain table test_block_chain (a number, b varchar2(100), c varchar2(100))
  2  no drop until 0 days idle
  3  no delete until 31 days after insert
  4  hashing using "sha2_512" version v1;

Table created.

SQL> set timing on
SQL> insert into test_block_chain select object_id, object_type, object_name from all_objects where rownum < 1000;

999 rows created.

SQL> commit;

Commit complete.

Elapsed: 00:00:00.82
SQL> insert into test_block_chain select object_id, object_type, object_name from all_objects where rownum < 2000;

1999 rows created.

SQL> commit;

Commit complete.

Elapsed: 00:00:01.56
SQL> insert into test_block_chain select object_id, object_type, object_name from all_objects where rownum < 4000;

3999 rows created.

SQL> commit;

Commit complete.

Elapsed: 00:00:03.03
SQL> insert into test_block_chain select object_id, object_type, object_name from all_objects where rownum < 8000;

7999 rows created.

SQL> commit;

Commit complete.

Elapsed: 00:00:06.38
SQL> insert into test_block_chain select object_id, object_type, object_name from all_objects where rownum < 16000;

15999 rows created.

SQL> commit;

Commit complete.

Elapsed: 00:00:11.71

I.e. the more data inserted, the longer the commit time. It goes up almost linearly with the number of rows inserted per transaction.
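A quick check of the measured numbers confirms this: dividing consecutive commit times (the elapsed values from the test above) gives ratios close to 2 for each doubling of the row count:

```python
# commit elapsed times (seconds) for 999/1999/3999/7999/15999 rows, from above
times = [0.82, 1.56, 3.03, 6.38, 11.71]
ratios = [round(t2 / t1, 2) for t1, t2 in zip(times, times[1:])]
print(ratios)  # each doubling of rows roughly doubles the commit time
```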

Can we gain something here by doing things in parallel? A COMMIT statement cannot be parallelized, but you may of course split an insert of e.g. 24000 rows into 2 inserts of 12000 rows each, run them in parallel and commit them at the same time. I created 2 simple scripts for that:


[email protected]:/home/oracle/ [DB0111 (CDB$ROOT)] cat load_bct.bash 
#!/bin/bash

ROWS_TO_LOAD=$1

sqlplus -S cbleile/${MY_PASSWD}@pdb1 <<EOF
insert into test_block_chain select object_id, object_type, object_name from all_objects where rownum <= $ROWS_TO_LOAD ;
-- alter session set events '10046 trace name context forever, level 12';
set timing on
commit;
-- alter session set events '10046 trace name context off';
exit
EOF

exit 0

[email protected]:/home/oracle/ [DB0111 (CDB$ROOT)] cat load_bct_parallel.bash 
#!/bin/bash

PARALLELISM=$1
LOAD_ROWS=$2

for i in $(seq ${PARALLELISM})
do
  ./load_bct.bash $LOAD_ROWS &
done
wait

exit 0

Loading 4000 rows in a single job:


[email protected]:/home/oracle/ [DB0111 (CDB$ROOT)] ./load_bct_parallel.bash 1 4000

4000 rows created.


Commit complete.

Elapsed: 00:00:03.56

Loading 4000 rows in 2 jobs running in parallel, each loading 2000 rows:


[email protected]:/home/oracle/ [DB0111 (CDB$ROOT)] ./load_bct_parallel.bash 2 2000

2000 rows created.


2000 rows created.


Commit complete.

Elapsed: 00:00:17.87

Commit complete.

Elapsed: 00:00:18.10

That doesn’t scale at all. Enabling SQL trace for the 2 parallel jobs showed this:


SQL ID: catcycjs3ddry Plan Hash: 3098282860

update sys.blockchain_table_chain$ set                    hashval_position =
  :1, max_seq_number =:2
where
 obj#=:3 and inst_id = :4 and chain_id = :5                  and epoch# = :6

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   2000      8.41       8.58          0    1759772       2088        2000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     2001      8.41       8.58          0    1759772       2088        2000

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  UPDATE  BLOCKCHAIN_TABLE_CHAIN$ (cr=2 pr=0 pw=0 time=103 us starts=1)
         1          1          1   TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED BLOCKCHAIN_TABLE_CHAIN$ PARTITION: ROW LOCATION ROW LOCATION (cr=2 pr=0 pw=0 time=25 us starts=1 cost=1 size=1067 card=1)
         1          1          1    INDEX RANGE SCAN BLOCKCHAIN_TABLE_CHAIN$_IDX (cr=1 pr=0 pw=0 time=9 us starts=1 cost=1 size=0 card=1)(object id 11132)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  buffer busy waits                             108        0.00          0.00
  latch: cache buffers chains                     1        0.00          0.00
********************************************************************************

SQL ID: fh1yz4801af27 Plan Hash: 1612174689

select max_seq_number, hashval_position
from
 sys.blockchain_table_chain$ where obj#=:1 and                     inst_id =
  :2 and chain_id = :3 and epoch# = :4


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   2000      0.55       0.55          0          0          0           0
Fetch     2000      7.39       7.52          0    1758556          0        2000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     4001      7.95       8.08          0    1758556          0        2000

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED BLOCKCHAIN_TABLE_CHAIN$ PARTITION: ROW LOCATION ROW LOCATION (cr=2 pr=0 pw=0 time=49 us starts=1 cost=1 size=1067 card=1)
         1          1          1   INDEX RANGE SCAN BLOCKCHAIN_TABLE_CHAIN$_IDX (cr=1 pr=0 pw=0 time=10 us starts=1 cost=1 size=0 card=1)(object id 11132)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  buffer busy waits                              80        0.00          0.00
  latch: cache buffers chains                     1        0.00          0.00

For comparison, the trace of the single job contained the following for the same 2 statements:


SQL ID: catcycjs3ddry Plan Hash: 3098282860

update sys.blockchain_table_chain$ set                    hashval_position =
  :1, max_seq_number =:2
where
 obj#=:3 and inst_id = :4 and chain_id = :5                  and epoch# = :6

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   4000      1.76       1.85          0       8001       4140        4000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     4001      1.76       1.85          0       8001       4140        4000

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  UPDATE  BLOCKCHAIN_TABLE_CHAIN$ (cr=2 pr=0 pw=0 time=102 us starts=1)
         1          1          1   TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED BLOCKCHAIN_TABLE_CHAIN$ PARTITION: ROW LOCATION ROW LOCATION (cr=2 pr=0 pw=0 time=26 us starts=1 cost=1 size=1067 card=1)
         1          1          1    INDEX RANGE SCAN BLOCKCHAIN_TABLE_CHAIN$_IDX (cr=1 pr=0 pw=0 time=12 us starts=1 cost=1 size=0 card=1)(object id 11132)

********************************************************************************

SQL ID: fh1yz4801af27 Plan Hash: 1612174689

select max_seq_number, hashval_position
from
 sys.blockchain_table_chain$ where obj#=:1 and                     inst_id =
  :2 and chain_id = :3 and epoch# = :4


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   4000      1.09       1.09          0          0          0           0
Fetch     4000      0.06       0.06          0       8000          0        4000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     8001      1.15       1.16          0       8000          0        4000

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED BLOCKCHAIN_TABLE_CHAIN$ PARTITION: ROW LOCATION ROW LOCATION (cr=2 pr=0 pw=0 time=49 us starts=1 cost=1 size=1067 card=1)
         1          1          1   INDEX RANGE SCAN BLOCKCHAIN_TABLE_CHAIN$_IDX (cr=1 pr=0 pw=0 time=10 us starts=1 cost=1 size=0 card=1)(object id 11132)

I.e. there’s a massive difference in logical IOs (1,759,772 versus 8,001 query gets for the update of sys.blockchain_table_chain$), and I could see in the trace that the SQLs became slower with each execution.

Summary: Blockchain tables are a great technology, but as with any other technology you should know its limitations. There is an overhead when committing, and inserting into such tables from parallel sessions currently does not scale at commit time. If you test blockchain tables, I recommend reviewing the PCTFREE setting of the blockchain table to avoid row migration.


by Clemens Bleile