(…and always check all directly involved components yourself)
Recently I was asked to look into a performance issue where an import had already been running for a week.
Oracle version: 220.127.116.11 64-bit EE
Operating system: SunOS 5.8 Generic_117350-26 sun4u sparc SUNW,Sun-Fire-880
The hardware has 8 CPUs, 32G of RAM and a Sun T2000 storage cabinet attached.
There was already a database present on the machine: a test database which wasn’t doing anything. Also, it was sized rather small for this kind of machine (25M buffer cache; 25M PGA (automatic)).
The import took place on a newly created database. Because this database instance was configured equally tiny (25M buffer cache; 25M PGA (automatic)), I decided to kill the running import and set sensible values for the key memory areas to take advantage of the hardware (10G buffer cache; 5G PGA). I also asked the Unix administrator to mount the Oracle data filesystem (a single filesystem shared by both databases) with the “forcedirectio” and “noatime” options.
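For reference, a minimal sketch of what such a change looks like, assuming an spfile is in use (db_cache_size and pga_aggregate_target are the standard parameters for these memory areas; I’m reconstructing the statements here, not quoting the ones actually used):

    -- sketch: resize the key memory areas, assuming an spfile is in use
    alter system set db_cache_size=10240M scope=spfile;
    alter system set pga_aggregate_target=5120M scope=spfile;
    -- restart the instance to activate the new sizes
    shutdown immediate
    startup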
The “forcedirectio” option enables direct IO on the filesystem, to avoid double buffering (rough and short: reads and writes are no longer buffered in the operating system buffer cache); “noatime” stops the filesystem from updating inode access times on every read.
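As a sketch, this is roughly what such a remount looks like on Solaris UFS (/u01/oradata is a hypothetical mount point standing in for the real Oracle data filesystem; on a real system the options also belong in /etc/vfstab to survive a reboot):

    # remount the Oracle data filesystem with direct IO and no atime updates
    # (/u01/oradata is a hypothetical mount point)
    mount -F ufs -o remount,forcedirectio,noatime /u01/oradata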
So, with a decently configured database and a filesystem set up for optimal IO, we started the import again.
Next, it turned out the test database wasn’t really doing nothing. In fact, it was used continuously, and because of the tuning actions it was now performing worse. A look in the performance repository revealed a sharp drop in physical IO:
[graph: physical IO per second of the test database]
Also, the import still performed exactly the same as before: taking an enormous amount of time…
What had happened, and what went wrong?
Always check all the components involved in a tuning action. The first thing that went wrong: the database which was reported to be doing nothing actually did approximately 1500 physical IOs per second (from the database’s point of view).
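This is the kind of quick check I mean; the database keeps its own (cumulative) view of physical IO, so sampling it twice and dividing the difference by the elapsed seconds gives a rate:

    -- cumulative physical IO since instance startup;
    -- sample twice and diff the values to get IOs per second
    select name, value
    from   v$sysstat
    where  name in ('physical reads', 'physical writes');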
Because the test database (which was presumed not to do anything) was sized very small, the Oracle buffer cache couldn’t cache all the data of its transactions, but its IO was buffered at the operating system level (the operating system had plenty of memory for that). That means the IO seen as physical by the database was in fact largely served from the operating system buffer cache.
Forcing the IO to go direct removed the ability to cache at the operating system level, which means the IO now really had to be done by the storage cabinet. Judging from the graph, the maximum rate this cabinet could sustain was approximately 550 IOs per second.
Next, I checked what the import was waiting for. A look at the waits of the import session revealed mainly ‘direct path write’ waits, and the ‘seconds_in_wait’ column showed wait times going as high as 500 (seconds!). Also, the alert log showed: ‘WARNING: aiowait timed out 1 times’.
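A minimal sketch of how to look at a session’s current wait (:sid stands for the import session’s SID, looked up in v$session beforehand):

    -- current wait of the import session; :sid is its SID from v$session
    select event, state, seconds_in_wait
    from   v$session_wait
    where  sid = :sid;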
Both the ‘direct path write’ waits and the warning in the alert log point to IO contention. Some more investigation revealed that this behavior is known on Sun (and is in fact CPU contention: the process does not get enough priority to be able to write), and is solved by both patch 112254-01 and setting the operating system kernel parameter ts_sleep_promote=1 (default 0). In short, these alter the scheduling behavior of the operating system, which could otherwise prevent certain processes from becoming runnable (and thus make them wait for a very long time, making the import last longer than a week).
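Setting the parameter goes through /etc/system; the TS:ts_sleep_promote form below is the usual Solaris module:variable syntax, but verify it against the patch README for your release:

    * /etc/system entry; takes effect after a reboot
    set TS:ts_sleep_promote=1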
Okay, how to solve these issues?
1. To reduce the number of IOs of the test database, scale up its buffer cache; plenty of memory is available for that.
2. With the number of IOs reduced, and the prioritization of runnable processes altered (ts_sleep_promote), both databases are able to run alongside each other just fine.
So, lesson learned: always check all the components involved.