Profiling PL/SQL has been an Oracle feature since Oracle 9i (2001). Yet I haven’t seen anyone actually use the profiler (besides myself, which is the reason for this post). On the other hand, I have seen shiploads of people guess what the time profile of their PL/SQL looks like. I have also seen people instrument their code, which ranges from bad (too little instrumentation, too much instrumentation, or the wrong timescale, for example measuring things that take less than milliseconds with a granularity of seconds) to good (using the correct granularity, or using Method R’s ILO).
Profiling and instrumentation go hand in hand; they absolutely don’t rule each other out. Instrumentation points you to the chunk(s) of code that take the most time, and should always be on. Profiling takes the timing down to the source code line level, and is the logical next step when you’ve been pointed to a piece of code that takes a long time. Profiling should be off by default.
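As a minimal sketch of what always-on instrumentation can look like (ILO offers much more; the built-in dbms_application_info package is used here instead, and the procedure, module and action names are made up for illustration):

```sql
create or replace procedure load_batch is
begin
  -- always-on instrumentation: module/action are visible in v$session
  -- (and ASH, if licensed) while this code is running
  dbms_application_info.set_module( module_name => 'load_batch',
                                    action_name => 'insert rows' );
  -- ... do the actual work here ...
  dbms_application_info.set_module( null, null );
end;
/
```

This costs close to nothing per call, which is why it can stay on permanently, unlike the profiler.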
I know the basic form of profiling has been described on a number of websites, including the Oracle documentation and Tim Hall’s Oracle-Base. To be honest, I haven’t got very much to add (just like all the other websites, which seem to have taken a copy of Oracle-Base).
The description of dbms_profiler in this article is about using dbms_profiler on an Oracle 11.2 database. If the profiler package is not loaded (check with ‘desc dbms_profiler’ as a DBA-granted user), load it as SYSDBA with ‘@?/rdbms/admin/profload.sql’. The only thing you then have to do is create a few tables and a sequence in the schema which is executing the profiler package. The simplest way to do this is to log on to the database server, log on with sqlplus as this owner, and execute:
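Assuming a default installation, the table creation script ships alongside profload.sql (this is the proftab script referred to further down):

```sql
-- creates plsql_profiler_runs, plsql_profiler_units, plsql_profiler_data
-- and the plsql_profiler_runnumber sequence in the current schema
@?/rdbms/admin/proftab.sql
```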
The next thing to do, is add start and stop commands for the profiler in the piece of ‘trouble code’. The code can be either SQL or PL/SQL. In PL/SQL add:
dbms_profiler.start_profiler( 'profiler run ' || to_char( sysdate, 'yyyymmddhh24mi' ) ); -- start
dbms_profiler.stop_profiler; -- stop
In SQL*Plus, add ‘exec’ in front of the dbms_profiler calls.
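For example, a profiler run driven entirely from the SQL*Plus prompt could look like this (insert_example is the demo procedure used later in this post):

```sql
exec dbms_profiler.start_profiler( 'profiler run ' || to_char( sysdate, 'yyyymmddhh24mi' ) )
exec insert_example
exec dbms_profiler.stop_profiler
```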
There’s a gotcha you really need to know about: the profiler records the source line number. What you can do, and what most scripts found on the internet do, is join the profiler output to the ALL_SOURCE view to see what happens at that line. Anonymous PL/SQL blocks, however, are not recorded in ALL_SOURCE, so when you profile anonymous PL/SQL blocks, you get ‘anonymous’ in the unit_owner and unit_name fields.
Let’s look at a complete example:
a) Please mind this is an extremely simple example! I use the SYS user for the sake of simplicity. Don’t do this in official environments!
b) I’ve checked the existence of dbms_profiler. In fact, I needed to load the profiler package in this database.
c) I’ve run the proftab script to generate the tables needed by the profiler.
d) This is the script I’ve used:
-- script: gtt_and_collection.sql
-- setup
set serveroutput on;

create global temporary table gtt (
  col1 number,
  col2 varchar2(10)
) on commit preserve rows;

create or replace procedure insert_example is
  nr_to_loop number := 1000000;
  nr number := 0;
  type col_type is table of gtt%ROWTYPE;
  col col_type := col_type();
begin
  --dbms_profiler.start_profiler('profiler run '||to_char(sysdate,'yyyymmddhh24mi'));
  dbms_output.put_line('Start: '||to_char(sysdate,'hh24:mi:sssss'));
  for nr in 1..nr_to_loop loop
    /* gtt */
    insert into gtt (col1, col2) values ( nr, 'AABBCCDDEE' );
    /* collection */
    col.extend;
    col(nr).col1 := nr;
    col(nr).col2 := 'AABBCCDDEE';
  end loop;
  commit;
  dbms_output.put_line('Stop: '||to_char(sysdate,'hh24:mi:sssss'));
  --dbms_profiler.stop_profiler;
end;
/

-- execute test
exec insert_example;

-- clean up
drop procedure insert_example;
truncate table gtt;
drop table gtt;
Let’s run it:
SQL> @gtt_and_collection

Table created.

Procedure created.

Start: 06:57:25074
Stop: 06:58:25110

PL/SQL procedure successfully completed.

Procedure dropped.

Table truncated.
Okay, the procedure ran for approximately 1 minute. Let’s suppose this takes too long. Now what are you going to do? Can this be sped up? Is the time taken by the insert into the global temporary table? Some would argue this is a temporary table, so it probably exists in memory. Or is it the collection that eats up the time? Or maybe even the commit after the loop?
Well, enable the profiler by removing the ‘--’ in front of the dbms_profiler calls in the procedure, and run it again! (Please mind that flashback database is an incredibly handy feature for redoing tests against exactly the same database. The downside is that you need to stop the database and mount it to flash it back, which means downtime for other users or developers.)
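For reference, a flashback cycle looks roughly like this (a sketch, assuming flashback database is enabled and a restore point named BEFORE_TEST was created before the first test run):

```sql
-- as SYSDBA; this bounces the instance, so plan for downtime
shutdown immediate
startup mount
flashback database to restore point BEFORE_TEST;
alter database open resetlogs;
```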
When it runs with the profiler, nothing extraordinary is seen; it just runs. You have to make a report of the profiler data yourself. This is the script I use:
col owner format a10
col name format a20
col line format 9999
col source format a40 word_wrap
col occurances format 9999999
col tot_time_s format 999,999,990.999
col run_comment format a40
col pct format 999

select runid, run_date, run_comment from plsql_profiler_runs;

select u.unit_owner OWNER,
       u.unit_name NAME,
       d.line# LINE,
       ( select text
         from all_source
         where type in ('PACKAGE BODY','FUNCTION','PROCEDURE','TRIGGER')
         and name = u.unit_name
         and line = d.line#
         and owner = u.unit_owner
         and type = u.unit_type ) SOURCE,
       d.total_occur OCCURANCES,
       ( d.total_time / 1000000000 ) TOT_TIME_S,
       d.total_time / r.run_total_time * 100 "PCT"
from plsql_profiler_runs r, plsql_profiler_units u, plsql_profiler_data d
where r.runid = &&runid
and r.runid = u.runid
and r.runid = d.runid
and d.unit_number = u.unit_number
and ( d.total_time / 1000000000 ) > 1 -- more than 1 second
order by tot_time_s
/

select ( run_total_time / 1000000000 ) tot_time_s from plsql_profiler_runs where runid = &runid;

undefine runid
The most notable difference from scripts found on the internet is that this script orders the output by time spent, which is why I use it, and why you probably should too. If you profile a real-life piece of code, more lines of code are probably involved than fit on your screen, so having the output in source code order doesn’t make sense to me.
What is the output?
     RUNID RUN_DATE  RUN_COMMENT
---------- --------- ----------------------------------------
         1 26-NOV-12 profiler run 201211260638
         2 26-NOV-12 profiler run 201211260643

Enter value for runid: 2
old   3: where r.runid=&runid
new   3: where r.runid=2

OWNER      NAME                  LINE SOURCE                                   OCCURANCES       TOT_TIME_S  PCT
---------- -------------------- ----- ---------------------------------------- ---------- ---------------- ----
SYS        INSERT_EXAMPLE          15 col.extend;                                 1000000            2.386    6
SYS        INSERT_EXAMPLE          12 insert into gtt (col1, col2) values (       1000000           32.836   88
                                      nr, 'AABBCCDDEE' );

old   1: select (run_total_time/1000000000) tot_time_s from plsql_profiler_runs where runid=&runid
new   1: select (run_total_time/1000000000) tot_time_s from plsql_profiler_runs where runid=2

      TOT_TIME_S
----------------
          37.130
Well, this makes it all obvious, doesn’t it? Only two lines of code have spent more than 1 second in total, which are the two shown. The collection management (adding a row) is responsible for 6% of the time, and the insert is responsible for 88%.