
This post is about using gdb, which is a debugger, so, very simplistically put, an aid for looking at C programs, as a profiler. I use gdb quite a lot for profiling, because for me it is the easiest way to profile.

Lots of people I know use other tools like perf, systemtap and dtrace for the same purpose, and that's fine. Each tool has its own advantages and disadvantages. One disadvantage of gdb is that it uses ptrace to attach to a process, which makes the traced process dead slow from a machine perspective, because everything it then does goes via another process: the debugger. That is simply how a debugger works.

Also, lots of people use gdb like I do and stick to the basic functionality: breaking at functions, which makes it possible to find out the sequence in which functions are called, generating backtraces (stack traces) to understand the stack, and maybe looking at function arguments.
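
To make this concrete, here is a minimal sketch of that basic workflow from within gdb; the pid is a placeholder, and sltrgatime64 is just an example of a function of interest (it comes back later in this post):

# attach to a running process (1234 is a placeholder pid)
attach 1234
# stop whenever a function of interest is entered
break sltrgatime64
continue
# when the breakpoint fires, print the call stack, then resume
backtrace
continue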

But there is more that you can do. Way more actually. These examples use the gdb that comes with RHEL7/OL7 via yum.

You can break conditionally at a function, which means that a break will be executed only if the condition is met. For example:

break sltrgatime64 if $caller_is("ksbabs")

This only breaks at a function called sltrgatime64 if it is called from a function that is called ksbabs.

But for what I wanted to investigate, I didn't only want to do "something" when the sltrgatime64 function was called from ksbabs, I wanted to be able to distinguish the two times it was called: the first one and the second one.

I knew that the “first time” was actually right after a function with the name of “dbktFlush” was called. So here’s what I did:

set $status = -1

break dbktFlush if $caller_is("ksbabs")
commands
 silent
 set $status = 0
 c
end

break sltrgatime64 if $caller_is("ksbabs")
commands
 silent
 if $status == 0
   set $status = 1
   printf "first invocation\n"
   c
 end
 if $status == 1
  set $status = 2
  printf "second invocation\n"
  c
 end
 if $status > 1
  printf "error. third or more invocations\n"
 end
end

This is much more advanced than using a single command, but I think that for anyone who can program in a few languages it is reasonably simple to understand.

There are two breaks, and both are conditional: the function from which the function to break on is called must be ksbabs.
In the break on dbktFlush, the (convenience) variable $status is set to 0, indicating the first invocation to the other break.

In the second break, there are three if statements. The first one picks up $status set to 0, indicating the first invocation, and sets it to 1. The second if picks up the second invocation, indicated by $status set to 1, and sets it to 2. The third if makes sure that if I didn't look carefully and there are more invocations of sltrgatime64, I get notified.

If the breaks were only activated at the point where the second call to sltrgatime64 was executed, execution would not have come across dbktFlush, so the $status variable would (still) be set to -1, and the break would skip all the ifs.

Of course this is a very specific case; you should tailor it to your own needs. However, this was the actual investigation I did into the purpose of the above sltrgatime64 calls.

In fact, I went a little deeper and wanted to understand what the timing of these two reads of the wall clock time did. For that, I used the python interface in gdb, support for which seems to be compiled into the rpm-based version.

You can use python, and python variables keep their state between gdb interactions. I used a dead simple interaction: calling time.time() to time the invocation of the sltrgatime64 that is executed right after dbktFlush was called:

python import time
python var_time=time.time()
# initialise $status so the sltrgatime64 break behaves before dbktFlush has been hit
set $status = -1

break dbktFlush if $caller_is("ksbabs")
commands
 silent
 set $status = 0
 c
end

break sltrgatime64 if $caller_is("ksbabs")
commands
 silent
 if $status == 0
   set $status = 1
   python print time.time()-var_time
   python var_time=time.time()
   c
  else
   c
 end
end

If you force a single logwriter and execute this against the logwriter, you will see that the logwriter tries to keep a 3 second pace by adjusting the timeout of semtimedop().
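
If you want to watch that pace yourself, a sketch along these lines could work. It assumes the x86_64 calling convention: the fourth argument of semtimedop(), the timeout, arrives in register RCX, and a struct timespec is simply two longs (seconds, nanoseconds), so no glibc debug info is needed:

break semtimedop
commands
 silent
 printf "semtimedop timeout: %ld s, %ld ns\n", ((long *)$rcx)[0], ((long *)$rcx)[1]
 c
end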

My actual goal was to find out what exactly is timed for the message 'Warning: log write elapsed time ????ms, size ?KB' in the logwriter trace file, which Nenad Noveljic proved is not only about the actual IO time.

To be honest, I did this clumsily, by going backwards and measuring all the calls that measure wall clock time, while I should simply have put a break on write and looked at the backtrace. When I executed a backtrace on the write call that wrote the trace file entry, I could see this line is written in kcrfw_postprocess_write().
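
For reference, that more direct approach, breaking on the libc write() call and looking at the backtrace, can be as simple as this (it fires for every write the process does, so expect some noise):

break write
commands
 silent
 bt
 c
end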

This is the gdb script with the timing:

python import time

break skgstmGetEpochTs if $caller_is("kcrfw_redo_write_driver")
commands
 silent
 python var_time=time.time()
 c
end

break kcrfw_do_write
commands
 silent
 print $rip
 c
end

break skgstmGetEpochTs if $caller_is("kcrfw_postprocess_write")
commands
 silent
 shell sleep 1
 python print time.time()-var_time
 c
end

What this gdb script does is always set the python variable var_time whenever skgstmGetEpochTs is called from kcrfw_redo_write_driver, so the measurement includes the idle cycles of the logwriter. If the process enters the function kcrfw_do_write, it is actually going to write; that is also when the line "Warning: log write elapsed time 1024ms, size 0KB" could potentially be produced.
When the function skgstmGetEpochTs is called in the function kcrfw_postprocess_write, the script executes a sleep of 1 second (to make the write take long enough to trigger the warning), and then prints the current time minus the time obtained via skgstmGetEpochTs in kcrfw_redo_write_driver.

What this proves is that the timing for the log write elapsed time warning is done from the start of the kcrfw_redo_write_driver function. That means that everything the logwriter needs to do, excluding the last bit of what it performs (posting the foregrounds), is timed, including dealing with the internal structures that keep track of the redo strands and the latching of them.


For some time now, I have been using gdb to trace the inner workings of the Oracle database. The reason for using gdb instead of systemtap or Oracle's dtrace is the lack of user-level tracing on Linux, and I am using Linux because most of my work happens on Linux.

In order to see with gdb the same information about Oracle's system calls as strace shows, you need debuginfo packages, which are available from the Oracle debug info repository. This requires a bit of explanation. When strace is used on a process doing IO that Oracle executes asynchronously, the IO calls as seen with strace look something like this:

io_submit(140425370206208, 1, {{0x7fb7516c4bc0, 0, 0, 0, 257}}) = 1
io_getevents(140425370206208,1,128,{{0x7fb7516c45e8,0x7fb7516c45e8,106496,0}}, {600, 0}) = 1

This reveals exactly how Oracle used these calls. In case you wonder how to read these calls: Linux (as well as any other Unix-like operating system) provides man pages (manual pages) not only for the command line tools, but also for system calls, C library functions, and device and special files, among others. So if you wonder what the io_submit line means, type 'man io_submit', or, to be 100% sure you are looking in the manual section for system calls, type 'man 2 io_submit' to specify you want section 2: system calls.

When I use gdb, and break on io_submit and io_getevents, I get this information:

Breakpoint 1, 0x00007fa883926660 in io_submit () from /lib64/libaio.so.1
Breakpoint 1, 0x00007fa883926660 in io_submit () from /lib64/libaio.so.1
Breakpoint 2, 0x000000000082d7d8 in io_getevents@plt ()

I think everybody can spot that I got less information now. In fact, I now only know that the calls have happened, and that's all; there is no additional information. In order to get part of the information back that was visible with strace, use the debuginfo package of libaio. The debuginfo package must exactly match the version of the package it provides debug symbols for, because it describes the executable or library based on physical code locations.
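
A quick way to check from within gdb whether debug information for a library was actually found is 'info sharedlibrary'; libraries for which gdb could not find debugging symbols are marked with "(*)" in its output:

# list loaded shared libraries matching "libaio";
# entries marked with "(*)" are missing debugging symbols
info sharedlibrary libaio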

In order to get information on these specific calls (libaio calls), the libaio-debuginfo package can be installed. Once done, we get a great deal of information which resembles strace:

Breakpoint 1, io_submit (ctx=0x7ff8b626c000, nr=1, iocbs=0x7fffa5c31a80) at io_submit.c:23
23	io_syscall3(int, io_submit, io_submit, io_context_t, ctx, long, nr, struct iocb **, iocbs)
Breakpoint 2, io_getevents_0_4 (ctx=0x7ff8b626c000, min_nr=2, nr=128, events=0x7fffa5c37b68, timeout=0x7fffa5c38b70) at io_getevents.c:46
46		if (ring==NULL || ring->magic != AIO_RING_MAGIC)

This shows all the arguments which are used by the process that is traced with gdb. Please mind that gdb breaks on entering the call, so it doesn't show a return code. And the return code of io_getevents() is what tells you the number of IOs that are ready, so that information is still not visible here, while strace, which does show the return code, makes it visible.
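
If you do want to see the return code from within gdb, one interactive option is to let the call run to completion with 'finish' once the breakpoint fires; with the libaio debuginfo installed, gdb knows the return type and prints the value returned. A sketch:

break io_getevents_0_4
continue
# ... the breakpoint fires ...
finish
# gdb prints 'Value returned is ...': the number of IOs that are ready
continue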

How about the Oracle userland calls? I break on kslwtbctx() and kslwtectx() a lot, which indicate the start (kslwtbctx()) and the end (kslwtectx()) of a wait event. When doing so, this is what it looks like:

Breakpoint 1, 0x00007f40a05c3660 in io_submit () from /lib64/libaio.so.1
Breakpoint 1, 0x00007f40a05c3660 in io_submit () from /lib64/libaio.so.1
Breakpoint 2, 0x000000000082d7d8 in io_getevents@plt ()
Breakpoint 2, 0x000000000082d7d8 in io_getevents@plt ()
Breakpoint 4, 0x0000000007cf47b6 in kslwtbctx ()
Breakpoint 2, 0x000000000082d7d8 in io_getevents@plt ()
Breakpoint 5, 0x0000000007cfb4f2 in kslwtectx ()

Here we see the libaio functions again, together with the Oracle wait event functions. When using these calls this way, we can safely say that some calls are done outside of a wait, and one call is done inside of a wait. Because this measurement is done on a piece of Oracle code well known to me (executing a full table scan via direct path), I just know the wait is 'direct path read'. But what if you do not know? Wouldn't it be nice to know which wait is timed here?

The simplest way to get more information on Oracle function calls is to get the debug information for the Oracle database. However, since that makes references to the source code, that will probably never happen. So, does that mean this is all we can get? No.

In order to get more information out of a function call, we need to dive a little deeper into the internals of Linux x86_64. When a function is called, the arguments are passed via processor registers. This is implementation specific, and differs between 32-bit and 64-bit. For x86_64 the System V AMD64 ABI calling convention applies, and the important rule is: "The first six integer or pointer arguments are passed in registers RDI, RSI, RDX, RCX, R8, and R9, while XMM0, XMM1, XMM2, XMM3, XMM4, XMM5, XMM6 and XMM7 are used for floating point arguments. For system calls, R10 is used instead of RCX."
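
As a trivial, generic illustration of looking at these registers once gdb has stopped at a breakpoint:

# show the registers that carry the first three integer/pointer arguments
info registers rdi rsi rdx
# or print a single one
print $rdi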

So. This means that if I look at the CPU registers when breaking on a function, there might be something usable. I say "something usable" deliberately, because the Oracle function calls are not publicly documented (I think/hope they are inside Oracle development). I've done some investigation, and it turns out that at the END of a wait event, a few functions are called which have some useful information stored in a CPU register:
a) First the function kslwtectx() is called to mark the ending of a wait event.
b) Then a function called kslwtrk_enter_wait_int is called, which stores the time the wait took in the register R13.
c) Next a function called kskthewt is called, which stores the number of the wait event (V$EVENT_NAME.EVENT#) in RSI.

If we combine that information in a little gdb macro, it looks like this:

break kslwtbctx
  commands
    silent
    printf "kslwtbctx\n"
    c
  end
break kslwtectx
  commands
    silent
    printf "kslwtectx -- "
    c
  end
break kslwtrk_enter_wait_int
  commands
    silent
    set $time=$r13
    c
  end
break kskthewt
  commands
    silent
    printf "wait: %d, time: %d\n", $rsi, $time
    c
  end

Put this in a text file, and once attached with gdb to the process you want to trace, load it using 'source '.
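
For completeness, attaching and loading from within gdb could look like this; the pid and the file name are placeholders:

attach 1234
source /tmp/wait_events.gdb
continue
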
Here is what it looks like when you put it on a process (I've put it on the checkpoint process):

kslwtbctx
kslwtectx -- wait: 7, time: 2999054
kslwtbctx
kslwtectx -- wait: 81, time: 1979
kslwtbctx
kslwtectx -- wait: 81, time: 1050
kslwtbctx
kslwtectx -- wait: 81, time: 1216
kslwtbctx
kslwtectx -- wait: 81, time: 2031
kslwtbctx
kslwtectx -- wait: 83, time: 10443

If you want to learn more about this stuff, don’t forget I will be doing a hands-on session on using gdb as a pre-conference training during Collaborate 2014 in Las Vegas.
