Introduction

When it comes to performance analysis on Oracle databases, everybody thinks about Diagnostic Pack. This is a very complete tool, and it brings more metrics than a DBA can analyze. But there is a big drawback with this tool: it’s not included in Standard Edition, and it’s a paid option with Enterprise Edition. If you don’t have it, it’s still possible to do performance analysis using the older Statspack tool. Statspack is nothing else than the father of Diagnostic Pack, and it still comes with all database versions free of charge. You just need to set it up correctly, and it will help a lot when diagnosing performance troubles, even if you’re using a modern environment, meaning 19c and Multitenant. For sure, there are some limitations compared to Diagnostic Pack, like text-only reports, fewer metrics and truncated SQL statements, but it’s powerful enough for a good DBA willing to check and improve their databases’ performance.

Does Statspack work correctly with 19c and Multitenant?

Yes, it does. You just need to use a specific setup procedure. Statspack should be deployed at the PDB level if you’re using Multitenant. And jobs must be configured using dbms_scheduler, as the old-fashioned dbms_job is deprecated starting with 19c. Here is the procedure I use when I deploy Statspack on a modern environment.

Setup procedure

Before configuring Statspack, make sure you’re using Standard Edition, or make sure you’re not using Diagnostic Pack on your Enterprise Edition database. The two tools are not supposed to work together, and there is absolutely no point in having both. Diagnostic Pack, and possibly Tuning Pack, are enabled/disabled with an instance parameter.

-- Check that DIAGNOSTIC and TUNING packs are disabled at the CDB level
SQL> conn / as sysdba
SQL> sho parameter control_mana

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
control_management_pack_access	     string	 NONE

-- Remove DIAGNOSTIC and TUNING packs if needed (scope=spfile: takes effect at the next instance restart)
alter system set control_management_pack_access='NONE' scope=spfile;

From now on, if you’re using Multitenant, commands are run on the PDB:

export ORACLE_PDB_SID=MYPDB;
sqlplus / as sysdba
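
You can quickly verify that you landed in the PDB and not in the root container (show con_name is a standard SQL*Plus command):

-- Verify the current container is the PDB
show con_name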

Statspack should use its own tablespace; 2 to 3GB should normally be OK.

-- Create a dedicated tablespace if needed
create tablespace PERFSTAT datafile size 300M autoextend on maxsize 3G;

In case Statspack was previously deployed, you can remove it first to get a clean installation:

-- Remove previously installed statspack if needed
@?/rdbms/admin/spdrop

Statspack setup will need a default tablespace (the one you just created), a temporary tablespace and a password for its PERFSTAT user (you will be prompted for it):

-- Create statspack's user and objects
define default_tablespace='perfstat'
define temporary_tablespace='temp'
@?/rdbms/admin/spcreate
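
Once spcreate completes, it’s worth making sure the installation is clean: spcreate writes log files (spcusr.lis, spctab.lis, spcpkg.lis) you can review for errors, and as a quick extra check you can look for invalid PERFSTAT objects:

-- Check that no statspack object is invalid
select count(*) from dba_objects where owner = 'PERFSTAT' and status = 'INVALID';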

You should configure an increased snapshot level to collect more metrics (level 7 adds plan and segment statistics):

-- Increase level of metrics
exec STATSPACK.MODIFY_STATSPACK_PARAMETER (i_snap_level=>7,i_modify_parameter=>'true', i_instance_number=>null);
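
You can verify the configured level afterwards; the setting is stored in a Statspack table:

-- Check the configured snapshot level
select snap_level from perfstat.stats$statspack_parameter;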

Statspack will need 2 jobs: 1 for snapshots (a snapshot flushes the current performance metrics into dedicated tables) and 1 for the snapshots’ purge (no need to keep old performance metrics).
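
Before scheduling anything, you can take one snapshot manually to check that everything works:

-- Take a manual snapshot
exec perfstat.statspack.snap;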

Let’s allow the user PERFSTAT to create jobs:

-- Allow PERFSTAT's user to create jobs
conn / as sysdba
grant create job to perfstat;
conn perfstat/perfstat
show user

Let’s stop and drop existing jobs for snapshots if needed:

-- Drop existing snapshot job if needed
exec dbms_scheduler.stop_job('SP_SNAP_JOB');
exec dbms_scheduler.drop_job('SP_SNAP_JOB');
exec dbms_scheduler.drop_schedule('SP_SNAP_SCHED');
exec dbms_scheduler.drop_program('SP_SNAP_PROG');

And now let’s create the snapshot scheduling components. In this example, my choice is one snapshot every 15 minutes. I would not recommend snapping more often; less often, like every hour, is also OK.

-- Create snapshot job using scheduler - every fifteen minutes
exec dbms_scheduler.create_program(program_name => 'SP_SNAP_PROG', program_type => 'STORED_PROCEDURE', program_action => 'PERFSTAT.statspack.snap', number_of_arguments => 0, enabled => FALSE);
exec dbms_scheduler.enable(name => 'SP_SNAP_PROG');
exec dbms_scheduler.create_schedule (schedule_name => 'SP_SNAP_SCHED', repeat_interval => 'freq=hourly; byminute=0,15,30,45; bysecond=0',end_date => null, comments => 'Schedule for Statspack snaps');
exec dbms_scheduler.create_job (job_name => 'SP_SNAP_JOB', program_name => 'SP_SNAP_PROG', schedule_name => 'SP_SNAP_SCHED',  enabled => TRUE, auto_drop => FALSE, comments => 'Statspack Job for snaps');
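
Rather than waiting for the next quarter hour, you can trigger a first run immediately to validate the job:

-- Run the snapshot job once, right now
exec dbms_scheduler.run_job('SP_SNAP_JOB');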

Let’s check the scheduling of this job:

-- Check job and scheduling
set lines 140
col owner for a10
col job_name for a15
col program_name for a20
col first for a18
col next for a18
col last for a18
select OWNER, JOB_NAME, PROGRAM_NAME, to_char(START_DATE,'YYYY/MM/DD HH24:MI') "FIRST", to_char(NEXT_RUN_DATE,'YYYY/MM/DD HH24:MI') "NEXT", to_char(LAST_START_DATE,'YYYY/MM/DD HH24:MI') "LAST" from dba_scheduler_jobs where owner='PERFSTAT';


OWNER	   JOB_NAME	   PROGRAM_NAME 	FIRST		 NEXT		  LAST
---------- --------------- -------------------- ---------------- ---------------- ----------------
PERFSTAT   SP_SNAP_JOB	   SP_SNAP_PROG 	2021/12/24 14:45 2021/12/24 14:45

Let’s drop the purge job if needed:

-- Drop existing purge job if needed
exec dbms_scheduler.stop_job('SP_PURGE_JOB');
exec dbms_scheduler.drop_job('SP_PURGE_JOB');
exec dbms_scheduler.drop_schedule('SP_PURGE_SCHED');
exec dbms_scheduler.drop_program('SP_PURGE_PROG');

Create the purge job scheduling. The purge is done weekly, during the night between Saturday and Sunday. In this example, I’m using a 30-day retention for snapshots. I would recommend a retention between 10 days and 2 months. Even if you don’t need a long retention, it’s always nice to be able to compare performance metrics with the days/weeks before a performance issue. Note that a small wrapper procedure is needed because statspack.purge takes a boolean argument (i_extended_purge), and the scheduler cannot pass boolean values as program arguments.

-- Create a procedure for the purge
create or replace procedure extended_purge(
num_days IN number
)
is
BEGIN
  statspack.purge(i_num_days => num_days, i_extended_purge => TRUE);
END extended_purge;
/

-- Test this procedure if needed
-- exec extended_purge(30);


-- Create purge job using scheduler - every Sunday at 0:20AM - keep 30 days of snapshots
exec dbms_scheduler.create_program(program_name => 'SP_PURGE_PROG', program_type => 'STORED_PROCEDURE', program_action => 'PERFSTAT.extended_purge', number_of_arguments => 1, enabled => FALSE);
exec DBMS_SCHEDULER.define_program_argument (program_name => 'SP_PURGE_PROG', argument_name => 'i_num_days', argument_position => 1, argument_type => 'NUMBER', default_value => 30);
exec dbms_scheduler.enable(name => 'SP_PURGE_PROG');
exec dbms_scheduler.create_schedule (schedule_name => 'SP_PURGE_SCHED', repeat_interval =>  'freq=weekly; byday=SUN; byhour=0; byminute=20',end_date => null, comments => 'Schedule for Statspack purge');
exec dbms_scheduler.create_job (job_name => 'SP_PURGE_JOB', program_name => 'SP_PURGE_PROG', schedule_name => 'SP_PURGE_SCHED',  enabled => TRUE, auto_drop => FALSE, comments => 'Statspack Job for purge');
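
The purge job can be validated the same way, without waiting until Sunday:

-- Run the purge job once, right now
exec dbms_scheduler.run_job('SP_PURGE_JOB');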

If you’re using older versions like 12cR1, some idle events may be missing and you should add them (this does not concern 19c):

-- Insert missing idle events on 12cR1 only
delete from STATS$IDLE_EVENT;
insert into STATS$IDLE_EVENT select name from V$EVENT_NAME where wait_class='Idle';
insert into STATS$IDLE_EVENT values('log file parallel write');
insert into STATS$IDLE_EVENT values('target log write size');
commit;

Now it’s time to lock the PERFSTAT user; running the Statspack report feature is usually done while connected as SYS:

-- Lock the perfstat user
conn / as sysdba
alter user perfstat account lock;
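
As a reminder, generating a report from these snapshots is done with the spreport script shipped with the database; it prompts for a begin snapshot, an end snapshot and a report name:

-- Generate a text report between two snapshots (as SYS, on the PDB)
@?/rdbms/admin/spreport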

Now, as SYS, let’s check the scheduling of both jobs:

-- Check the jobs' scheduling
set lines 140
col owner for a10
col job_name for a15
col first for a18
col next for a18
col last for a18
col program_name for a20
select OWNER, JOB_NAME, PROGRAM_NAME, to_char(START_DATE,'YYYY/MM/DD HH24:MI') "FIRST", to_char(NEXT_RUN_DATE,'YYYY/MM/DD HH24:MI') "NEXT", to_char(LAST_START_DATE,'YYYY/MM/DD HH24:MI') "LAST"  from dba_scheduler_jobs where owner='PERFSTAT';

OWNER	   JOB_NAME	   PROGRAM_NAME 	FIRST		 NEXT		  LAST
---------- --------------- -------------------- ---------------- ---------------- ----------------
PERFSTAT   SP_SNAP_JOB	   SP_SNAP_PROG 	2021/12/24 14:45 2021/12/24 14:45
PERFSTAT   SP_PURGE_JOB    SP_PURGE_PROG	2021/12/25 00:20 2021/12/25 00:20

Let’s check again later, including the run and failure counts, the actual retention (how many days of snapshots are available) and the volume of data used by these performance metrics:

-- Check later after several occurrences
set lines 140
col owner for a10
col job_name for a15
col first for a18
col next for a18
col last for a18
col program_name for a20
select OWNER, JOB_NAME, PROGRAM_NAME, to_char(START_DATE,'YYYY/MM/DD HH24:MI') "FIRST", to_char(NEXT_RUN_DATE,'YYYY/MM/DD HH24:MI') "NEXT", to_char(LAST_START_DATE,'YYYY/MM/DD HH24:MI') "LAST", run_count, failure_count  from dba_scheduler_jobs where owner='PERFSTAT';

OWNER	   JOB_NAME	   PROGRAM_NAME 	FIRST		   NEXT 	      LAST		  RUN_COUNT FAILURE_COUNT
---------- --------------- -------------------- ------------------ ------------------ ------------------ ---------- -------------
PERFSTAT   SP_SNAP_JOB	   SP_SNAP_PROG 	2021/12/24 14:45   2022/01/02 17:15   2022/01/02 17:00		874		0
PERFSTAT   SP_PURGE_JOB    SP_PURGE_PROG	2022/01/02 00:20   2022/01/09 00:20   2022/01/02 00:20		  1		0


-- Check actual retention
SQL> select round(sysdate-min(snap_time)) "RETENTION" from stats$snapshot;

 RETENTION
----------
	 7

-- Check data volume
SQL> select round(sum(bytes)/1024/1024,1) "PERFSTAT MB" from dba_segments where owner = 'PERFSTAT';

PERFSTAT MB
-----------
      742.8



...


OWNER	   JOB_NAME	   PROGRAM_NAME 	FIRST		   NEXT 	      LAST		  RUN_COUNT FAILURE_COUNT
---------- --------------- -------------------- ------------------ ------------------ ------------------ ---------- -------------
PERFSTAT   SP_SNAP_JOB	   SP_SNAP_PROG 	2021/12/24 14:45   2022/01/04 19:15   2022/01/04 19:00	       1074		0
PERFSTAT   SP_PURGE_JOB    SP_PURGE_PROG	2022/01/02 00:20   2022/01/09 00:20   2022/01/02 00:20		  1		0


 RETENTION
----------
	 9


PERFSTAT MB
-----------
      940.8

...

OWNER	   JOB_NAME        PROGRAM_NAME         FIRST              NEXT               LAST                RUN_COUNT FAILURE_COUNT
---------- --------------- -------------------- ------------------ ------------------ ------------------ ---------- -------------
PERFSTAT   SP_SNAP_JOB     SP_SNAP_PROG         2021/12/24 14:45   2022/01/17 14:30   2022/01/17 14:15         2303             0
PERFSTAT   SP_PURGE_JOB    SP_PURGE_PROG        2022/01/02 00:20   2022/01/23 00:20   2022/01/16 00:20            3             0


 RETENTION
----------
	12


PERFSTAT MB
-----------
     1751.8

Expected data volume

Statspack metrics can use several GB on each database, depending on the snapshot frequency and retention. Here is what you can expect depending on your settings, each snapshot being about 1MB (see the sketch after this list to check the real figure on your database):

  • 1 month – 1 snap/h: ±750MB
  • 1 month – 2 snaps/h: ±1500MB
  • 1 month – 4 snaps/h: ±3000MB
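
If you want to measure it on your own database, a simple sketch is to divide the size of the PERFSTAT segments by the number of snapshots currently kept (this slightly overestimates the value, as it includes the fixed overhead of the Statspack objects):

-- Estimate the average size of one snapshot in MB
select round(sum(bytes)/1024/1024 / (select count(*) from perfstat.stats$snapshot), 1) "MB PER SNAP"
  from dba_segments
 where owner = 'PERFSTAT';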

Conclusion

Statspack is still a very capable and usable tool on 19c. Yes, it’s old fashioned, but it’s free and it does the job for those DBAs who know how to read the reports.