Oracle database reliable message waits

This blogpost is about the Oracle database wait event ‘reliable message’. Normally this should not be a prominent wait, and if it is, the most logical cause is something that is not working as it should, either because something is oversubscribed or simply because of a bug.

The reliable message note on My Oracle Support provides a good description:
https://support.oracle.com/epmos/faces/DocContentDisplay?_afrLoop=34595019638128&id=69088.1&_afrWindowMode=0&_adf.ctrl-state=15ief27plw_97

It tells a few things about what a reliable message is:
– A message sent using the KSR intra-instance broadcast service.
– The message publisher waits on this wait event until all subscribers have consumed the message.
– The publisher waits for up to one second and then retests whether all subscribers have consumed the message.
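
Because the p1 value of this wait is used later on to identify the channel, it can help to first check how Oracle labels the parameters of the event. A minimal sketch, assuming access to v$event_name:

-- show the parameter names (p1/p2/p3) of the 'reliable message' wait event
select name, parameter1, parameter2, parameter3
from   v$event_name
where  name = 'reliable message';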

So the first thing to establish is that for a reliable message, the publisher of the message waits until all subscribers of the channel it published the message on have finished consuming it. That is what ‘reliable’ means in reliable message: all subscribers are guaranteed to have consumed the message. Not all messages in the Oracle executable are ‘reliable’; take the log writer notification during redo generation in kcrfw_redo_gen_ext>kcrf_commit_force_int as an example: it only signals the log writer if it hasn’t been signalled already, and if it does, it is just a nudge for the log writer to start its task. There is nothing stateful about that notification, nor in the message itself, such as an SCN to write up to.
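
To get an impression of the KSR channels that exist in an instance, you can list the channel descriptions. A minimal sketch, assuming a SYSDBA connection because x$ tables are used:

-- list the KSR channels known to the instance
select indx, id_ksrcdes, name_ksrcdes
from   x$ksrcdes
order by indx;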

The second thing to consider is that, to my knowledge, the KSR/reliable message mechanism itself does not suffer from issues or widespread bugs. This means that, from what I have seen (for what that’s worth :-D), the KSR code for reliable messaging is unlikely to be the problem itself if you see this event. It is much more likely the function behind the message that the KSR code publishes on the channel to its subscribers that is causing the reliable message wait.

To dive deeper into this, I first need to restate what I said at the beginning: in order to sensibly investigate long times spent in a wait event, any wait event really, the very first thing to check is whether the system the database runs on is functioning normally. In other words: verify that the issue represented by the wait event is not in fact an issue outside of the database, for which the database has merely made your work easier by timing its manifestation for you, rather than an issue inside the database.
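
Once the system itself is ruled out, the next step is to find the sessions that are actually waiting on ‘reliable message’ and to pick up the p1 value that is used in the queries further down. A minimal sketch against v$session:

-- find sessions currently waiting on 'reliable message' and their p1 (channel context address)
select sid, program, event, p1, seconds_in_wait
from   v$session
where  event = 'reliable message'
and    state = 'WAITING';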

To investigate the reliable message wait, you need to understand the logic of reliable messaging in the Oracle database. Let me take a truncate as an example: a ‘truncate table’ requires the current blocks of the table to be written to disk. To accomplish that, the truncating process sends a reliable message to the checkpointer via the ‘kcbo’ channel, whose description is ‘obj broadcast channel’.

If you freeze the checkpointer process, it will not be able to do anything, and thus it cannot consume the message that is sent by the process executing a truncate. One way of freezing a process is to attach to it with a debugger, such as gdb. Of course this should only ever be done in a test environment and never in production, because freezing (background) processes can have all kinds of bad side effects, and it will almost certainly influence anything else that is going on.
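
With the checkpointer frozen, a truncate can be used to provoke the wait. A minimal sketch to reproduce it in a test database; the table name is made up for the example:

-- create a throwaway table; the name is purely illustrative
create table t_reliable_msg_test as select * from dba_objects;
-- the truncate sends a reliable message to the checkpointer and will hang in
-- the 'reliable message' wait for as long as the checkpointer stays frozen
truncate table t_reliable_msg_test;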

If you look at the truncating process, you will see the wait event ‘reliable message’. Take the p1 value from the waiting process and execute the following SQL:

-- replace <p1> with the p1 value of the reliable message wait event
select des.id_ksrcdes, des.name_ksrcdes
from x$ksrcctx ctx, x$ksrcdes des 
where to_number(ctx.addr,'XXXXXXXXXXXXXXXX') = <p1> 
and ctx.name_ksrcctx = des.indx;

This shows the channel and the description of the channel the reliable message is waiting for:

ID_KSRCDES                        NAME_KSRCDES
--------------------------------- ---------------------------------------------
kcbo                              obj broadcast channel

To understand more you can use the x$ksrchdl view and link these with the publishers and subscribers of the channel context:

-- inspired by and largely taken from Tanel Poder's channels2.sql script (https://github.com/tanelpoder/tpt-oracle/blob/master/channel2.sql)
-- replace <p1> with the p1 value of the reliable message wait event
select case when bitand(c.flags_ksrchdl,1)=1 then 'PUB ' end || 
       case when bitand(c.flags_ksrchdl,2)=2 then 'SUB ' end || 
       case when bitand(c.flags_ksrchdl,16)=16 then 'INA ' end flags,
       s.sid,
       p.program,
       cd.name_ksrcdes channel_name, 
       cd.id_ksrcdes, 
       c.ctxp_ksrchdl
from   x$ksrchdl c , 
       v$process p, 
       v$session s, 
       x$ksrcctx ctx, 
       x$ksrcdes cd
where 1=1
and to_number(c.ctxp_ksrchdl,'XXXXXXXXXXXXXXXX') = <p1>
and s.paddr(+)=c.owner_ksrchdl 
and p.addr(+)=c.owner_ksrchdl
and c.ctxp_ksrchdl=ctx.addr
and cd.indx=ctx.name_ksrcctx
;

In my case, this shows the following information:

FLAGS          SID PROGRAM                                                      CHANNEL_NAME                             ID_KSRCDES CTXP_KSRCHDL
------------ ----- ------------------------------------------------------------ ---------------------------------------- ---------- ----------------
SUB            258 oracle@oracle-database.local (CKPT)                          obj broadcast channel                    kcbo       000000007AAB3020
PUB            141 oracle@oracle-database.local (M001)                          obj broadcast channel                    kcbo       000000007AAB3020
PUB            142 oracle@oracle-database.local (W005)                          obj broadcast channel                    kcbo       000000007AAB3020
PUB            145 oracle@oracle-database.local (W007)                          obj broadcast channel                    kcbo       000000007AAB3020
PUB            396 oracle@oracle-database.local (TNS V1-V3)                     obj broadcast channel                    kcbo       000000007AAB3020

The important bit here is that this channel, “kcbo”, has 4 publishers (“my” session and the background processes M001, W005 and W007) and one subscriber: CKPT, the checkpoint process. In this case, the process we are waiting for could be guessed, not only because I caused the waiting situation myself, but also because the “final_blocking_session” field in v$session would point you to it. But in the case of multiple subscribers, you can use this query to find all processes currently participating in the channel.
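
For the simple case, the final_blocking_session route looks like this. A minimal sketch against v$session:

-- show which session the reliable message waiter is ultimately blocked by
select sid, event, final_blocking_session_status, final_blocking_session
from   v$session
where  event = 'reliable message';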

A subscribing session essentially does three things that matter for the reliable message wait:
1. It finds a published message in the channel it is subscribed to, and starts acting on the message.
2. It performs the task or tasks that should be done as a subscriber to that channel.
3. It ‘consumes’ the message, which ends the reliable message wait for the publisher.

In the case of a truncate, the foreground process executing the truncate finds the handle, adds itself as publisher to the handle, adds the message and then posts the subscriber(s) using ‘semop’ to activate them. Then it goes into a function called ‘ksrpubwait_ctx’, and waits for the subscribers to do their work in the ‘reliable message’ wait event. The wait is implemented as a semaphore wait (semtimedop) with a 1 second timeout, as the MOS note indicates.

The subscriber, which is the checkpointer process, will be woken from its semtimedop “sleep” if it is idle, and starts its work cycle. One of the calls in the checkpointer work cycle is ksb_act_run>ksb_act_run_int>kcbo_subch, which inspects the kcbo “KSR channel” to see whether work has been published.

If so, it sets the RO enqueues, adds a checkpoint of type 10 (object reuse checkpoint) to the checkpoint queue and posts the log writer, after which the checkpointer is done, which means it posts the publisher.

Once the publisher, which is the foreground, is posted, it stops waiting in the ‘reliable message’ wait event and continues its work. Depending on the speed of all the processes and lots of other things, the next thing that happens might be that the foreground tries to obtain the RO enqueues exclusively and cannot, or the background processes might already have finished their work and released the RO enqueues. But these dealings are not the point of this blogpost.
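
If you want to see these RO enqueues while this scenario plays out, v$lock shows the holders and waiters. A minimal sketch:

-- RO (object reuse) enqueue holders and waiters
select sid, type, id1, id2, lmode, request, block
from   v$lock
where  type = 'RO';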

What I do hope you understand by the time you have reached this part is that the ‘reliable message’ event is part of generic database infrastructure that serves different functions, and that you should look into the specific function for which the KSR infrastructure is used, rather than into the KSR infrastructure itself.
