One of the challenges for PL/SQL developers is to simulate production activity in a non-production environment, for example different INSERT statements executed by several sessions.

Different tools exist for this, such as Oracle RAT (Real Application Testing), which requires an extra license, or you can build your own solution in PL/SQL using the DBMS_SCHEDULER or DBMS_PARALLEL_EXECUTE packages.

The aim of this blog is to show you how to use DBMS_PARALLEL_EXECUTE to parallelize several INSERT commands across different sessions.

My sources for this blog are oracle-base and the Oracle documentation.

My goal is to insert 3000 rows into the table DBI_FK_NOPART through different sessions in parallel.

First of all, let’s check the MAX primary key in the table:

select max(pkey) from XXXX.dbi_fk_nopart;
MAX(PKEY)
9900038489

For my test, I created and populated a new table test_tab, as described on oracle-base, which is used to define the chunks that drive the parallel sessions. In my case, 3 chunks will be created.
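Here is a minimal sketch of how such a helper table can be created and populated (the oracle-base setup contains more columns; only num_col matters here, as it provides the chunk boundary values):

-- Helper table: each distinct num_col value will become one chunk
CREATE TABLE test_tab (num_col NUMBER);

INSERT INTO test_tab VALUES (10);
INSERT INTO test_tab VALUES (20);
INSERT INTO test_tab VALUES (30);
COMMIT;

The chunking query simply returns each value twice, once as the chunk start_id and once as the chunk end_id: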

SELECT DISTINCT num_col, num_col FROM test_tab;
num_col num_col1
10      10
30      30
20      20

The following commands must be embedded in a PL/SQL block or procedure; only the main calls are shown here.

The first step is to create a new task:

DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'TASK_NAME');

We split the data into 3 chunks:

DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL(task_name => 'TASK_NAME', sql_stmt => 'SELECT DISTINCT num_col, num_col FROM test_tab', by_rowid => FALSE);

Now I want to insert 1000 rows for each chunk, with each chunk processed by a different session. At the end I will therefore have 3000 rows inserted through different sessions.

Next, we build the dynamic PL/SQL block that executes the INSERTs:

v_sql_stmt := 'declare
  s varchar2(16000);
  vstart_id number := :start_id;
  vend_id number := :end_id;
  v_pkey number;
begin
  for rec in 1..1000 loop
    -- Re-read the current max(pkey) at each iteration so that every
    -- new row gets the key max + 1
    EXECUTE IMMEDIATE ''SELECT max(pkey) FROM xxxx.DBI_FK_NOPART'' INTO v_pkey;
    -- The chunk boundaries (start_id, end_id) are kept as bind
    -- variables and stored in the fcvprodkomppkey and
    -- fckvprdankomppkey columns
    s := ''INSERT /*TEST_INSERT_DBI_FK_NOPART*/ INTO xxxx.DBI_FK_NOPART (
      pkey,
      boid,
      metabo,
      lastupdate,
      processid,
      rowcomment,
      created,
      createduser,
      replaced,
      replaceduser,
      archivetag,
      mdbid,
      itsforecast,
      betrag,
      itsopdetherkunft,
      itsopdethkerstprm,
      itsfckomppreisseq,
      clsfckomppreisseq,
      issummandendpreis,
      partitiontag,
      partitiondomain,
      fcvprodkomppkey,
      fckvprdankomppkey,
      session_id
    ) VALUES (
      1 + ''||v_pkey||'',
      ''''8189b7c7-0c36-485b-8993-054dddd62708'''',
      -695,
      sysdate,
      ''''B.3142'''',
      NULL,
      SYSDATE,
      ''''svc_xxxx_Mig_DEV_DBITEST'''',
      SYSDATE,
      NULL,
      NULL,
      NULL,
      ''''8a9f1321-b3ec-46d5-b6c7-af1c7fb5167G'''',
      0,
      ''''ae03b3fc-b31c-433b-be0f-c8b0bdaa82fK'''',
      NULL,
      ''''5849f308-215b-486b-95bd-cbd7afe8440H'''',
      -251,
      0,
      201905,
      ''''E'''',
      :start_id,
      :end_id,
      SYS_CONTEXT(''''USERENV'''',''''SESSIONID''''))'';
    execute immediate s using vstart_id, vend_id;
    commit;
  end loop;
end;';
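Note that the statement passed to RUN_TASK must contain the :start_id and :end_id bind variables: DBMS_PARALLEL_EXECUTE binds each chunk’s boundaries into them, which is why the block declares vstart_id and vend_id and reuses the boundaries as column values.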

The next step is to execute the task with parallel_level => 4, meaning I want the rows to be inserted through 4 different sessions.

DBMS_PARALLEL_EXECUTE.RUN_TASK(task_name => 'TASK_NAME', sql_stmt => v_sql_stmt, language_flag => DBMS_SQL.NATIVE, parallel_level => 4);
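RUN_TASK returns once all chunks have been processed. If some chunks fail, the task ends in the FINISHED_WITH_ERROR state; in that case the failed chunks can be retried with RESUME_TASK. A minimal sketch (not part of this test, everything ran fine here):

IF DBMS_PARALLEL_EXECUTE.TASK_STATUS('TASK_NAME') = DBMS_PARALLEL_EXECUTE.FINISHED_WITH_ERROR THEN
  -- Re-run only the chunks that did not complete successfully
  DBMS_PARALLEL_EXECUTE.RESUME_TASK('TASK_NAME');
END IF;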

Let’s check the TASK execution status:

SELECT task_name,status FROM user_parallel_execute_tasks;
TASK_NAME STATUS
TASK_NAME FINISHED

And let’s check the chunks created; we should have 3 chunks:

SELECT chunk_id, status, start_id, end_id FROM   user_parallel_execute_chunks WHERE  task_name = 'TASK_NAME' ORDER BY chunk_id;
CHUNK_ID STATUS     START_ID END_ID
9926    PROCESSED   10  10
9927    PROCESSED   30  30
9928    PROCESSED   20  20

As we have used the parameter parallel_level => 4, we should have 4 different jobs using 4 different sessions:

SELECT log_date, job_name, status, session_id FROM user_scheduler_job_run_details WHERE job_name LIKE 'TASK$%' ORDER BY log_date DESC;
LOG_DATE                            JOB_NAME        STATUS      SESSION_ID
29.12.21 14:38:41.882995000 +01:00  TASK$_22362_3   SUCCEEDED   3152,27076
29.12.21 14:38:41.766619000 +01:00  TASK$_22362_2   SUCCEEDED   14389,25264
29.12.21 14:38:41.657571000 +01:00  TASK$_22362_1   SUCCEEDED   3143,9335
29.12.21 14:38:41.588968000 +01:00  TASK$_22362_4   SUCCEEDED   6903,60912

Now let’s check the MAX primary key in the table:

select max(pkey) from xxxx.dbi_fk_nopart;
MAX(PKEY)
9900041489
select 9900041489 - 9900038489 from dual;
3000

3000 rows have been inserted, and the data has been split into chunks of 1000 rows per session. Note that only 3 sessions show up: since there were only 3 chunks, the 4th job found no chunk to process.

select count(*), session_id from xxxx.dbi_fk_nopart where pkey > 9900038489 group by session_id;
count(*) session_id
1000    4174522508
1000    539738149
1000    4190321565
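Once the results are verified, the task metadata can be cleaned up with DROP_TASK, the standard final step of the DBMS_PARALLEL_EXECUTE lifecycle (not shown in the run above):

DBMS_PARALLEL_EXECUTE.DROP_TASK(task_name => 'TASK_NAME');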

Conclusion:

DBMS_PARALLEL_EXECUTE is easy to use, performs well, and offers several chunking options:

  • Data can be split by ROWID using DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID (see the sketch after this list)
  • Data can be split on a number column using DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_NUMBER_COL
  • Data can be split by a user-defined query using DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL (the method used in this blog)
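As an illustration, here is how the first two variants can be called (a sketch; the owner, table, column, and chunk_size values are just examples):

-- Split the table into ROWID ranges of about 1000 rows each
DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(task_name => 'TASK_NAME', table_owner => 'XXXX', table_name => 'DBI_FK_NOPART', by_row => TRUE, chunk_size => 1000);

-- Split the table into ranges of 1000 values of a numeric column
DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_NUMBER_COL(task_name => 'TASK_NAME', table_owner => 'XXXX', table_name => 'DBI_FK_NOPART', table_column => 'PKEY', chunk_size => 1000);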