
For some time now, I have been using gdb to trace the inner workings of the Oracle database. The reason for using gdb instead of systemtap or Oracle's dtrace is the lack of usable user-level tracing on Linux, and I am doing this on Linux because most of my work happens on Linux.

In order to see the same information on Oracle's system calls with gdb as strace shows, there is the Oracle debuginfo repository. This requires a bit of explanation. When strace is used on a process doing IO that Oracle executes asynchronously, the IO calls seen with strace look something like this:

io_submit(140425370206208, 1, {{0x7fb7516c4bc0, 0, 0, 0, 257}}) = 1
io_getevents(140425370206208,1,128,{{0x7fb7516c45e8,0x7fb7516c45e8,106496,0}}, {600, 0}) = 1

This reveals exactly how Oracle used these calls. In case you wonder how to read them: Linux (as well as any other Unix-like operating system) provides man pages (manual pages) not only for the command line tools, but also for system calls, C library functions and device and special files, among others. So if you wonder what the io_submit line means, type 'man io_submit', or, to be 100% sure you are looking at the manual page of the system call, type 'man 2 io_submit' to specify section 2: system calls.

When I use gdb, and break on io_submit and io_getevents, I get this information:

Breakpoint 1, 0x00007fa883926660 in io_submit () from /lib64/libaio.so.1
Breakpoint 1, 0x00007fa883926660 in io_submit () from /lib64/libaio.so.1
Breakpoint 2, 0x000000000082d7d8 in io_getevents@plt ()

I think everybody can spot that I get less information now. In fact, I now only know that the calls happened, and that's all; there is no additional information. In order to get part of the information back that was visible with strace, use the debuginfo package of libaio. The debuginfo package must match the version of the package it provides debug symbols for exactly, because it describes the executable or library based on physical code locations.
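
On Oracle Linux or RHEL-style systems, installing it can look like this (a sketch; it assumes the debuginfo repository is enabled and that debuginfo-install, part of yum-utils, is available):

# debuginfo-install resolves and installs the debuginfo package that matches
# the exact version of the installed libaio package
yum install -y yum-utils
debuginfo-install libaio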

In order to get information on these specific calls (the libaio calls), the libaio-debuginfo package can be installed. Once that is done, we get a great deal of information back, resembling strace:

Breakpoint 1, io_submit (ctx=0x7ff8b626c000, nr=1, iocbs=0x7fffa5c31a80) at io_submit.c:23
23	io_syscall3(int, io_submit, io_submit, io_context_t, ctx, long, nr, struct iocb **, iocbs)
Breakpoint 2, io_getevents_0_4 (ctx=0x7ff8b626c000, min_nr=2, nr=128, events=0x7fffa5c37b68, timeout=0x7fffa5c38b70) at io_getevents.c:46
46		if (ring==NULL || ring->magic != AIO_RING_MAGIC)

This shows all the arguments used by the process that is traced with gdb. Please mind that gdb breaks on entering the call, so it doesn't show a return code. And the return code of io_getevents() is what tells us how many IOs are ready, so that information is still not visible here, while strace does provide it.
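
If you do want the return value with gdb, one option (a minimal sketch) is to let the call run to completion with 'finish' once the breakpoint has fired: gdb prints the returned value when it knows the return type, and on x86_64 an integer return value can also be read directly from the RAX register:

(gdb) break io_getevents
(gdb) continue
(gdb) finish
(gdb) print $rax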

How about the Oracle userland calls? I break on kslwtbctx() and kslwtectx() a lot, which indicate the start (kslwtbctx()) and end (kslwtectx()) of a wait event. When doing so, this is what it looks like:

Breakpoint 1, 0x00007f40a05c3660 in io_submit () from /lib64/libaio.so.1
Breakpoint 1, 0x00007f40a05c3660 in io_submit () from /lib64/libaio.so.1
Breakpoint 2, 0x000000000082d7d8 in io_getevents@plt ()
Breakpoint 2, 0x000000000082d7d8 in io_getevents@plt ()
Breakpoint 4, 0x0000000007cf47b6 in kslwtbctx ()
Breakpoint 2, 0x000000000082d7d8 in io_getevents@plt ()
Breakpoint 5, 0x0000000007cfb4f2 in kslwtectx ()

Here we see the libaio functions again, together with the Oracle wait event functions. Looking at the calls this way, we can safely say that some calls are done outside of a wait, and one call is done inside of a wait. Because this measurement is done on a well-known piece of Oracle code (well known to me: executing a full table scan via direct path), I just know the wait is 'direct path read'. But what if you do not know? Wouldn't it be nice to know which wait is recorded here?

The simplest way to get more information on Oracle function calls would be debug information for the Oracle database itself. However, since that would expose references to the source code, it will probably never be published. So, does that mean this is all we can get? No.

In order to get more information out of a function call, we need to dive a little deeper into the internals of Linux x86_64. When a function is called, the arguments are passed via processor registers. This is implementation specific, and differs between 32-bit and 64-bit. An overview of how that works is summarised in this table. The important line is: "The first six integer or pointer arguments are passed in registers RDI, RSI, RDX, RCX, R8, and R9, while XMM0, XMM1, XMM2, XMM3, XMM4, XMM5, XMM6 and XMM7 are used for floating point arguments. For system calls, R10 is used instead of RCX."
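
In gdb, these registers can be inspected directly whenever execution is stopped; for example (nothing Oracle specific yet, just the register names from the table above):

(gdb) info registers rdi rsi rdx rcx r8 r9
(gdb) print/x $rdi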

So: this means that if I look at the CPU registers when breaking on a function, there might be something usable there. I say "something usable" deliberately, because the Oracle function calls are not publicly documented (I think/hope they are inside Oracle development). I've done some investigation, and it turns out that at the END of a wait event, a few functions are called that leave useful information in a CPU register:
a) First the function kslwtectx() is called to mark the ending of a wait event.
b) Then a function called kslwtrk_enter_wait_int is called, which stores the time the wait took in register R13.
c) Next a function called kskthewt is called, which stores the number of the wait event (V$EVENT_NAME.EVENT#) in RSI.

If we combine that information in a little gdb macro, it looks like this:

break kslwtbctx
  commands
    silent
    printf "kslwtbctx\n"
    c
  end
break kslwtectx
  commands
    silent
    printf "kslwtectx -- "
    c
  end
break kslwtrk_enter_wait_int
  commands
    silent
    set $time=$r13
    c
  end
break kskthewt
  commands
    silent
    printf "wait: %d, time: %d\n", $rsi, $time
    c
  end

Put this in a text file, and once you have attached gdb to the process you want to trace, load it using gdb's 'source' command.
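
For completeness, attaching and loading could look like this (the pid and the filename are just examples):

# 1234 is the pid of the Oracle process to trace, waits.gdb contains the macro above
gdb -p 1234
(gdb) source /tmp/waits.gdb
(gdb) continue
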
Here is how it looks when you put it on a process (I've put it on the checkpoint process):

kslwtbctx
kslwtectx -- wait: 7, time: 2999054
kslwtbctx
kslwtectx -- wait: 81, time: 1979
kslwtbctx
kslwtectx -- wait: 81, time: 1050
kslwtbctx
kslwtectx -- wait: 81, time: 1216
kslwtbctx
kslwtectx -- wait: 81, time: 2031
kslwtbctx
kslwtectx -- wait: 83, time: 10443

If you want to learn more about this stuff, don’t forget I will be doing a hands-on session on using gdb as a pre-conference training during Collaborate 2014 in Las Vegas.

This blogpost is about how to print the arguments of a system call that is caught with 'catch' or 'break' in gdb. The reason for this blogpost is that I spent quite some time searching for this and working around it, so writing it down might help others who spend (some of) their time in the gdb debugger and encounter the same issue.

When you break on a system call in gdb, it will show you something like this:

Breakpoint 2, semctl () at ../sysdeps/unix/syscall-template.S:82
82 T_PSEUDO (SYSCALL_SYMBOL, SYSCALL_NAME, SYSCALL_NARGS)

All fine, but most people who break on something want to know what the arguments of the call are! We are actually breaking on what is called a "syscall wrapper" (explanation in the link). This means we do not get the arguments of the system call nicely shown on screen, as we would like (well, I do!).

An example of a system call which does show its arguments nice and dandy on screen is io_submit():

Breakpoint 3, io_submit (ctx=0x7f42badba000, nr=1, iocbs=0x7fff0cc0f30) at io_submit.c:23
23      io_syscall3(int, io_submit, io_submit, io_context_t, ctx, long, nr, struct iocb **, iocbs)

This allows me to look into the arguments, for example:

(gdb) print *iocbs[0]

But now back to the main problem: for some system calls, when I break on them, I end up in the system call wrapper, which doesn't show any arguments. After some time, I found this answer on Stack Overflow, which is exactly what I needed: the system call arguments are passed via registers!

Let me show you how this works. I was investigating the system call nanosleep() recently. To understand its arguments, first issue 'man nanosleep':

NAME
      nanosleep - high-resolution sleep

SYNOPSIS
      #include <time.h>

      int nanosleep(const struct timespec *req, struct timespec *rem);

So, when nanosleep is called, it is passed pointers to two structs of type "timespec". The first one (req) holds the requested sleep time; the second one (rem) is used if the nanosleep was interrupted by a signal, in which case the remaining time is written into it. For the sake of understanding what is happening, I really only want to know what is in the first struct (*req).

So this is how I setup a break on nanosleep() in gdb, and print the argument I want to see:

(gdb) break nanosleep
Breakpoint 2 at 0x32e0e0ef10: file ../sysdeps/unix/syscall-template.S, line 82. (2 locations)
(gdb) commands
Type commands for breakpoint(s), 2, one per line.
End with a line saying just "end".
>print (struct timespec) *$rdi
>c
>end
(gdb) c
Continuing.

Breakpoint 2, nanosleep () at ../sysdeps/unix/syscall-template.S:82
82 T_PSEUDO (SYSCALL_SYMBOL, SYSCALL_NAME, SYSCALL_NARGS)
$1 = {tv_sec = 0, tv_nsec = 867779000}

There you go: we got the argument of a system call (the one I was interested in) that was "hidden" by the Linux system call wrapper.
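
The same approach works for the semctl() wrapper shown at the beginning of this post: per the x86_64 calling convention, its first three (integer) arguments arrive in RDI, RSI and RDX. A sketch of a breakpoint that prints them (the argument names are taken from the semctl man page):

break semctl
  commands
    silent
    printf "semctl(semid=%d, semnum=%d, cmd=%d)\n", (int)$rdi, (int)$rsi, (int)$rdx
    c
  end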

In my blogpost When the oracle wait interface isn't enough I showed how a simple asynchronous direct path scan of a table was spending more than 99% of its time on CPU, and that perf showed me that 68% (of the total elapsed time) was spent on a spinlock unlock in the Linux kernel, called by io_submit().

This led to some very helpful comments from Tanel Poder. This blogpost is a materialisation of his comments, together with tests to show the difference.

First take a look at what I gathered from ‘perf’ in the first article:

# Samples: 501  of event 'cpu-clock'
# Event count (approx.): 501
#
# Overhead  Command       Shared Object                               Symbol
# ........  .......  ..................  ...................................
#
    52.50%   oracle  [kernel.kallsyms]   [k] _raw_spin_unlock_irqrestore    
             |
             --- _raw_spin_unlock_irqrestore
                 mptspi_qcmd
                 scsi_dispatch_cmd
                 scsi_request_fn
                 __blk_run_queue
                 queue_unplugged
                 blk_flush_plug_list
                 blk_finish_plug
                |          
                |--99.24%-- do_io_submit
                |          sys_io_submit
                |          system_call_fastpath
                |          io_submit
                |          skgfqio
                |          ksfd_skgfqio
                |          ksfdgo

This shows that, while profiling a "select count(*) from t2", the server process spent 52.5% of its time on unlocking a spinlock.

This was in the previous blogpost, and Tanel commented the following:

I would be suspicious of any of the “irqrestore” functions shown as the main CPU cycle consumer – as right after enabling interrupts again on a CPU may be just the first chance for the profiler interrupt to kick in and do the RIP and stack backtrace read. This is highly dependent on the hardware (how new CPUs) and OS version + VM version + whether the VM allows the guest OS to use hardware performance counters directly.

Let's reiterate what I was doing: I was profiling the execution using Linux' in-kernel perf functionality, but because I was running on VMWare Fusion (desktop virtualisation), which did not give the guest access to the CPU performance counter registers, I was using perf in the following way: perf record -e cpu-clock.

This is a partial list of perf's triggering events:

List of pre-defined events (to be used in -e):
  cpu-cycles OR cycles                               [Hardware event]
  instructions                                       [Hardware event]
  cache-references                                   [Hardware event]
  cache-misses                                       [Hardware event]
  branch-instructions OR branches                    [Hardware event]
  branch-misses                                      [Hardware event]
  bus-cycles                                         [Hardware event]
  stalled-cycles-frontend OR idle-cycles-frontend    [Hardware event]
  stalled-cycles-backend OR idle-cycles-backend      [Hardware event]
  ref-cycles                                         [Hardware event]

  cpu-clock                                          [Software event]
  ...etc...

If no specific event is specified, perf tries to use 'cpu-cycles', which has the indication [Hardware event], which means the CPU's performance counter registers are used to gather information. If that is not possible (because virtualisation blocks access to the performance counters), the software event 'cpu-clock' can be used instead. This is what I used in the previous article.
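
In other words, the event is selected with -e; the two invocations compared in this post look like this (the pid is a placeholder):

# hardware event: cycle counting via the CPU performance counters (the default when available)
perf record -g -e cycles -p <pid>
# software event: sampling driven by the kernel timer interrupt
perf record -g -e cpu-clock -p <pid>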

However, cpu-clock is a software event, and this event depends on the timer interrupt. And the function we see most of the time attributed to (_raw_spin_unlock_irqrestore) is the re-enabling of interrupts for this process when the spinlock is unlocked. So this could mean we did not actually spend our time in this function: while interrupts are disabled the timer interrupt cannot fire, so samples pile up at the point where interrupts are re-enabled. We simply cannot tell, because the timing source was disabled.

However, there was another helpful comment from Tanel:

VMWare Fusion 5.x should already allow some CPU perf counters to be accessed directly in the VM guest. It requires a new enough CPU though (it works in my late 2011 MBP, but not in the 2009 MBP). There’s a setting under “advanced options” under “processors & memory” -> “You can use code profiling applications such as VTune or OProfile to optimize or debug software that runs inside a virtual machine.”

Indeed, there is such a setting, so let's enable it and try again in EXACTLY the same way, but now using the 'cpu-cycles' method (which is the default).

# Samples: 669  of event 'cycles'
# Event count (approx.): 288603593
#
# Overhead  Command      Shared Object                                   Symbol
# ........  .......  .................  .......................................
#
    11.31%   oracle  oracle             [.] sxorchk                            
             |
             --- sxorchk
                |          
                |--98.50%-- kcbhxoro
                |          kcbhvbo
                |          kcbzvb
                |          kcbldrget
                |          kcbgtcr
                |          ktrget3
                |          ktrget2
                |          kdst_fetch
                |          kdstf00000010000kmP
                |          kdsttgr
                |          qertbFetch
                |          qergsFetch
                |          opifch2
                |          kpoal8
                |          opiodr
                |          ttcpip
                |          opitsk
                |          opiino
                |          opiodr
                |          opidrv
                |          sou2o
                |          opimai_real
                |          ssthrdmain
                |          main
                |          __libc_start_main
                |          
                 --1.50%-- kcbhvbo
                           kcbzvb

This is radically different! All of a sudden the top function is not a spinlock in the kernel any more, but an Oracle function!

Let's look at the top 5 locations where time is spent in exactly the same case, but with -e cycles (the default) and with -e cpu-clock (non-default/software timer):

# perf record -g -p 2527 
^C
# perf report -n -g none
...
# Samples: 580  of event 'cycles'
# Event count (approx.): 256237297
#
# Overhead      Samples  Command       Shared Object                                   Symbol
# ........  ...........  .......  ..................  .......................................
#
    17.47%          100   oracle  oracle              [.] sxorchk                            
     7.99%           47   oracle  oracle              [.] kdstf00000010000kmP                
     6.01%           35   oracle  oracle              [.] kcbhvbo                            
     3.25%           19   oracle  oracle              [.] kdst_fetch                         
     3.01%           17   oracle  [kernel.kallsyms]   [k] __wake_up_bit        

And now the same execution, but with the software timer:

# perf record -g -p 2527 -e cpu-clock
^C
# perf report -n -g none
...
# Samples: 422  of event 'cpu-clock'
# Event count (approx.): 422
#
# Overhead      Samples  Command      Shared Object                            Symbol
# ........  ...........  .......  .................  ................................
#
    78.67%          332   oracle  [kernel.kallsyms]  [k] _raw_spin_unlock_irqrestore 
     4.03%           17   oracle  oracle             [.] sxorchk                     
     2.13%            9   oracle  oracle             [.] kcbhvbo                     
     1.90%            8   oracle  oracle             [.] kdstf00000010000kmP         
     0.95%            4   oracle  oracle             [.] qeaeCn1Serial                 

This reveals some information: it seems that when profiling with the software timer, the "_raw_spin_unlock_irqrestore" function "eats" a lot of samples, which are "stolen" from the functions where the time is actually spent:
sxorchk has 100 samples with the hardware event, and 17 with the software timer.
kcbhvbo has 35 samples with the hardware event, and 9 with the software timer.
kdstf00000010000kmP has 47 samples with the hardware event, and 8 with the software timer.

So, the general conclusion is that it is important to understand what you are measuring, and whether that measuring method has implications for what you measure.
The conclusion specific to perf: do not use cpu-clock if you can use the hardware event.

This is a quick writeup of an oddity I found while trying to install the vmware tools in an Oracle Linux host running the UEK3 kernel (which is the default).

This is what the vmware tools installation dialog shows when running vmwaretools.pl:

Searching for a valid kernel header path...
The path "" is not a valid path to the 3.8.13-16.2.2.el6uek.x86_64 kernel 
headers.
Would you like to change it? [yes]

The building of the vmware tools fails because the kernel headers cannot be found: the installer doesn't see the kernel headers, whilst you probably did install them (it's the kernel-uek-devel package belonging to the running kernel).

The reason is that vmwaretools.pl is searching for /usr/src/kernels/KERNELVERSION/include/linux/version.h, and that file is not there anymore. The workaround is to symlink version.h from /usr/src/kernels/KERNELVERSION/include/generated/uapi/linux/version.h to its old place:

ln -s /usr/src/kernels/3.8.13-16.2.2.el6uek.x86_64/include/generated/uapi/linux/version.h /usr/src/kernels/3.8.13-16.2.2.el6uek.x86_64/include/linux/version.h

Oracle has done a great job with the wait interface. It has given us the opportunity to profile the time spent in Oracle processes, by keeping track of CPU time and waits (which is time spent not running on CPU). With every new version, Oracle has enhanced the wait interface by making the waits more detailed. Tuning typically means trying to get rid of waits as much as possible.

But what if your execution is optimised to the point that there are (almost) no waits left? Before you think this is theoretical: it is possible, especially with Oracle adaptive direct path reads (which are IOs not cached in the Oracle buffer cache), visible by the wait 'direct path read'. Of course I am talking about the omission of waits, which happens with adaptive direct path reads if your system is able to return the IO request results fast enough: there is no wait because, if the IO result comes back fast enough, the process doesn't have to wait. Whilst this sounds very obvious, the 'traditional' Oracle IO requests (visible with the waits 'db file sequential read' and 'db file scattered read') always generate a wait, no matter how fast the IO requests were.

Here is a trace excerpt from a full table scan where the IO was fast enough to generate only a few waits:

PARSE #140145843472584:c=0,e=28,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=3321871023,tim=1385308947947766
EXEC #140145843472584:c=0,e=31,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=3321871023,tim=1385308947947823
WAIT #140145843472584: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=75579 tim=1385308947947871
WAIT #140145843472584: nam='asynch descriptor resize' ela= 1 outstanding #aio=0 current aio limit=1562 new aio limit=1592 obj#=75579 tim=1385308947947969
WAIT #140145843472584: nam='direct path read' ela= 428 file number=5 first dba=28418 block cnt=126 obj#=75579 tim=1385308947989097
FETCH #140145843472584:c=161976,e=174323,p=20941,cr=20944,cu=0,mis=0,r=1,dep=0,og=1,plh=3321871023,tim=1385308948122218
WAIT #140145843472584: nam='SQL*Net message from client' ela= 249 driver id=1413697536 #bytes=1 p3=0 obj#=75579 tim=1385308948122600
FETCH #140145843472584:c=0,e=2,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=3321871023,tim=1385308948122689
WAIT #140145843472584: nam='SQL*Net message to client' ela= 1 driver id=1413697536 #bytes=1 p3=0 obj#=75579 tim=1385308948122709
WAIT #140145843472584: nam='SQL*Net message from client' ela= 210 driver id=1413697536 #bytes=1 p3=0 obj#=75579 tim=1385308948122938
CLOSE #140145843472584:c=0,e=15,dep=0,type=3,tim=1385308948555460

The most interesting part of the raw trace file is between the EXEC line and the first FETCH line. There is first a 'SQL*Net message to client' wait, then an 'asynch descriptor resize' wait, and then a single 'direct path read' wait. That is a single wait line for doing IO, while the FETCH line shows that 20941 blocks were read by doing physical IO. The FETCH line also shows that most of the elapsed time (e) was spent running on CPU (c). This means that details about how those 20941 blocks were read are (except for the single 'direct path read' wait) not available.

But what if you want to understand more about what the process is doing here? Except for a few wait lines, all the processing detail that waits normally give is gone. It's more or less only the PARSE/EXEC/FETCH lines, where the first FETCH line contains more than 99% of all the time.

The answer to that on Linux is perf. Perf is a profiler that is embedded in the Linux kernel (since 2.6.32). I've written more about perf before; use the search field on this blog to find articles on how to set up and use it. Now let's see what is happening in this situation: what is Oracle doing to execute the above mentioned SQL (select count(*) from t2)?

I ran perf on the session above with 'perf record -g -e cpu-clock -p PID', and the result (with 'perf report') is shown below:

    67.58%   oracle  [kernel.kallsyms]  [k] _raw_spin_unlock_irqrestore
             |
             --- _raw_spin_unlock_irqrestore
                |          
                |--99.19%-- mptspi_qcmd
                |          scsi_dispatch_cmd
                |          scsi_request_fn
                |          __blk_run_queue
                |          queue_unplugged
                |          blk_flush_plug_list
                |          blk_finish_plug
                |          generic_file_read_iter
                |          generic_file_aio_read
                |          aio_rw_vect_retry
                |          aio_run_iocb
                |          io_submit_one
                |          do_io_submit
                |          sys_io_submit
                |          system_call_fastpath
                |          io_submit
                |          
                 --0.81%-- __wake_up
                           fsnotify_add_notify_event
                           inotify_handle_event
                           send_to_group
                           fsnotify
                           vfs_write
                           sys_write
                           system_call_fastpath
                           __write_nocancel

     4.40%   oracle  oracle             [.] sxorchk

What is shown here is that, for 68% of the time the process ran on CPU, it was spending its time in kernel mode ([k]), in a function called _raw_spin_unlock_irqrestore. This function was reached via two different paths, but in 99% of the cases it came from mptspi_qcmd, which is the device-specific kernel driver. What is even more remarkable is that, when we follow the backtrace up (by reading down), the process was in fact issuing IOs (the io_submit system call)!

This means that instead of spending time on waiting for IOs to finish, this system is spending time on a spinlock (comparable to a latch in Oracle) used for issuing commands to a SCSI device.

The next function in which the Oracle process spends time is an Oracle function (visible by [.], which means userland function) called sxorchk. This function is an xor check (governed by the db_block_checking parameter).
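
If you want to check the block checking/checksum settings on your own instance, a quick look from sqlplus (a sketch; which parameter exactly drives this code path may differ per version):

SQL> show parameter db_block_check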

As a summary: does this mean the Oracle wait interface is useless? Of course not. But if the wait interface simply does not provide enough information, like when 99% of the time is only visible as CPU time, you need to step down to another layer and investigate there. Perf opens up the CPU time, and is able to tell you how that CPU time is composed.

Recently I have been involved in a project which requires a lot of data to be extracted from Oracle. The size of the data was so huge that the filesystems filled up. Compressing the output afterwards (using tar with j (bzip2) or z (gzip)) is an obvious solution, but that can only be done once the files have been created. This is why I proposed compressing the output without it ever existing in uncompressed form.

This solution works with a so-called 'named pipe', which I know for sure can be created on Linux and the unixes. A named pipe lets two processes transfer data between each other. The solution will look familiar to 'older' Oracle DBAs: this is how exports were compressed with the 'original' export utility (exp).

I've created a small script with sqlplus embedded in it, which executes the sqlplus commands using a 'here document':

mknod /tmp/oracle.pipe p

sqlplus / as sysdba << _EOF
set escape on

host nohup gzip -c < /tmp/oracle.pipe > /tmp/out1.gz \&
spool /tmp/oracle.pipe
select * from dual;
spool off

host nohup gzip -c < /tmp/oracle.pipe > /tmp/out2.gz \&
spool /tmp/oracle.pipe
select * from dual;
spool off

_EOF

rm /tmp/oracle.pipe

First a pipe is created (mknod filename p). As far as I know, this command is the same on Linux and the unixes. This pipe is removed as the last step of the script.

Inside the sqlplus script, I start the gzip operating system command using sqlplus' 'host' command. The host line starts gzip with the pipe as its input, writing its output to a .gz file in /tmp; the process is put in the background using '&'.
Next, the sqlplus spool command directs output to the pipe, and I execute a dummy sql (select * from dual).
With 'spool off', the output to the pipe is stopped, which makes the gzip process in the background finish.
Because that gzip process is no longer compressing to the first file, the pipe can be used a second time, and more times of course.
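
As a small aside: on Linux, mkfifo is an equivalent (and perhaps more common) way to create the named pipe, and ls shows the 'p' file type for it:

mkfifo /tmp/oracle.pipe
ls -l /tmp/oracle.pipe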

The result is two gzipped files:

zcat -v /tmp/out*.gz
/tmp/out1.gz:	SQL> select * from dual;

D                                                                               
-                                                                               
X                                                                               

SQL> spool off
 84.2%
/tmp/out2.gz:	SQL> select * from dual;

D                                                                               
-                                                                               
X                                                                               

SQL> spool off
 84.2%

When you are administering an Exadata, or multiple Exadatas, you probably have multiple databases running on different database or "compute" nodes. In order to understand what kind of IO you are doing, you can look at the statistics of your database, and check in the data dictionary what that instance (or those instances, in the case of RAC) has been doing. When using Exadata there is a near 100% chance you are using either normal or high redundancy, and with that comes the "write amplification" of ASM: the write statistics in the Oracle data dictionary do not reflect the additional writes needed to satisfy normal (#IO times 2) or high (#IO times 3) redundancy. This means there can be a difference between the IOs you measure or think your database is doing, and what is actually done at the storage level.

But what if you want to know what is happening at the storage level, so at the level of the cell, or actually "cellsrv", the process which makes IO flow to your databases? One option is to run "iostat -x", but that gives a list that is quite hard to read (too many disk devices), and it doesn't show you what the reason for the IO was: a redo write? a controlfile read? an archivelog? This is especially useful to know when your IO behaves differently than you expect, and you've ruled out IORM.

Well, it is possible to get such an IO overview (cumulative since startup)! Every storage server keeps a table of IO reasons. This table can be dumped into a trace file on the cell. To generate a dump with an overview of what kinds of IO are done, use "cellcli" locally on a cell and enter the following command:

alter cell events="immediate cellsrv.cellsrv_dump('ioreasons',0)";

This doesn’t generate anything useful as output on the command line, except for the name of the thread-logfile where we can find the contents of the dump we requested:

Dump sequence #18 has been written to /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/log/diag/asm/cell/enkcel01/trace/svtrc_15737_14.trc
Cell enkcel01 successfully altered

As an aid for finding your dump in the thread logfile: search in 'less' with '/', using the sequence number from the output above, for example: /sequence #18
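
Alternatively, grep can give you the line number of the dump header directly (a sketch, using the tracefile name from the cellcli output above):

grep -n "Dump sequence #18" /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/log/diag/asm/cell/enkcel01/trace/svtrc_15737_14.trc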

This is an example from a cell in the Enkitec lab, which I used for this example:

Cache::dumpReasons           I/O Reason Table
2013-10-23 08:11:06.869047*: Dump sequence #18:
Cache::dumpReasons Reason                  Reads Writes
Cache::dumpReasons ------------------------------------
Cache::dumpReasons UNKNOWN                436784 162942
Cache::dumpReasons RedoLog Write               0  80329
Cache::dumpReasons RedoLog Read              873      0
Cache::dumpReasons ControlFile Read       399993      0
Cache::dumpReasons ControlFile Write           0 473234
Cache::dumpReasons ASM DiskHeader IO        4326      4
Cache::dumpReasons BufferCache Read        27184      0
Cache::dumpReasons DataHeader Read          2627      0
Cache::dumpReasons DataHeader Write            0   1280
Cache::dumpReasons Datafile SeqRead           45      0
Cache::dumpReasons Datafile SeqWrite           0    373
Cache::dumpReasons HighPriority Checkpoint Write      0   6146
Cache::dumpReasons DBWR Aged Write             0    560
Cache::dumpReasons ReuseBlock Write            0    150
Cache::dumpReasons Selftune Checkpoint Write      0 116800
Cache::dumpReasons RequestLit Write            0     25
Cache::dumpReasons Archivelog IO               0    255
Cache::dumpReasons TrackingFile IO          2586   2698
Cache::dumpReasons ASM Relocate IO             0    200
Cache::dumpReasons ASM Replacement IO          0     91
Cache::dumpReasons ASM CacheCleanup IO         0   4514
Cache::dumpReasons ASM UserFile Relocate       0   2461
Cache::dumpReasons ASM Redo IO                 0  10610
Cache::dumpReasons ASM Cache IO             1953      0
Cache::dumpReasons ASM PST IO                  0     44
Cache::dumpReasons ASM Heartbeat IO           26 162984
Cache::dumpReasons ASM BlockFormat IO          0   3704
Cache::dumpReasons ASM StaleFile IO            0    675
Cache::dumpReasons OSD Header IO               0    315
Cache::dumpReasons Smart scan              11840      0

Please mind the numbers here are numbers of IOs; they say nothing about the size of the IOs. Also please mind these are the numbers of a single cell; you probably have 3, 7 or 14 cells.

In my opinion this IO summary can be of much value during IO performance investigations, but also during proofs of concept.

If the cell has been running for a while, these numbers may grow very big. In order to establish an easy baseline, the IO reason numbers can be reset, so you can start your test or proof-of-concept run and measure what actually happened at the cell layer. In order to reset the IO reason table, enter the following command in cellcli:

alter cell events = "immediate cellsrv.cellsrv_resetstats(ioreasons)"; 

This will reset the IO reasons table in the cell.

PS1: Thanks to Nikolay Kovachev for pointing out the 'ioreasons' resetstats parameter. Indeed 'all' is way too blunt.
PS2: The IO numbers seem to be the number of IO requests the cell has gotten from its clients (ASM and database) for data, not for metadata. During a smartscan, metadata flows between the database and the cell server before data is actually served.
