This post is the result of a question I got after presenting a session about Oracle database mutexes, organised by ITOUG in response to the conference cancellations caused by COVID-19. Thanks to Gianni Ceresa for asking it!
The library cache provides shared cursors and execution plans. Because they are shared, sessions can reuse the work previous sessions did to create them. However, because they are shared, access needs to be regulated so that sessions do not overwrite each other’s work. This is done with mutexes.
The question I got was (this is paraphrased from memory): ‘when using pluggable databases, could a session in one pluggable database influence the performance of a session in another pluggable database?’
The answer I gave was that I hadn’t tested a multi-tenant scenario, but because the library cache is shared between the pluggable databases, it should be possible for a session in one pluggable database to block a session in another pluggable database.
So let’s test it!
I used an Oracle version 20.2 database, which automatically gives you a pluggable database. In fact, even when you don’t specify that you want a multi-tenant database, it will create one. This is as expected and documented.
I created an instance called o202, and two pluggable databases, PDB1 and PDB2. Yes, I am a creative genius.
I logged in to PDB1 and executed a ‘select 8 from dual’.
SQL> show con_id

CON_ID
------------------------------
3

SQL> select 8 from dual;

         8
----------
         8
Now, using another session in the root container, I dumped the library cache at level 16 to see what that looks like for the ‘select 8 from dual’ cursor:
SQL> show con_id

CON_ID
------------------------------
1

SQL> alter session set events 'immediate trace name library_cache level 16';

Session altered.
(don’t do this on a live environment!)
These are snippets of certain parts of the dump:
Bucket: #=100089 Mutex=0x71f7f8d8(627065225216, 6, 0, 6)
  LibraryHandle: Address=0x67bcca90 Hash=372d86f9 LockMode=N PinMode=0 LoadLockMode=0 Status=VALD
    ObjectName: Name=select 8 from dual
      FullHashValue=f18bf11763dd069341c61ce6372d86f9 Namespace=SQL AREA(00) Type=CURSOR(00) ContainerId=1 ContainerUid=1 Identifier=925730553 OwnerIdn=9
...
    Concurrency: DependencyMutex=0x67bccb40(0, 1, 0, 0) Mutex=0x67bccbe0(146, 22, 0, 6)
...
    Child: childNum='0'
      LibraryHandle: Address=0x6e6b3fd8 Hash=0 LockMode=N PinMode=0 LoadLockMode=0 Status=VALD
        Name: Namespace=SQL AREA(00) Type=CURSOR(00) ContainerId=3
...
        Concurrency: DependencyMutex=0x6e6b4088(0, 0, 0, 0) Mutex=0x67bccbe0(146, 22, 0, 6)
This is bucket 100089, with its mutex. The counter for this mutex is the second argument, which is 6.
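As a side note, the bucket number can be derived from the hash value. Assuming the default library cache hash table size of 131,072 buckets, the bucket is the hash value modulo the bucket count (its low-order 17 bits). A quick sketch to verify, using the Hash= field from the dump above:

```sql
-- 0x372d86f9 is the cursor's hash value from the dump; 131072 is the
-- (assumed default) number of library cache hash buckets.
select to_number('372d86f9', 'xxxxxxxx')              as hash_value,
       mod(to_number('372d86f9', 'xxxxxxxx'), 131072) as bucket
from   dual;
```

This returns 925730553 and 100089, matching the Identifier field of the parent handle and the bucket number in the dump.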
The library cache handle on the next line is the parent handle. The sql text and the hash value sit with the parent handle.
I am not sure how to interpret the “ContainerId” here, it says 1 (indicating the root container), while this SQL is only executed in container 3, which is shown above.
After the first ‘…’, the two mutexes in the parent handle can be seen: the dependency mutex, and the parent handle mutex.
After the second ‘…’, child number 0 is visible. Here the container is specified from which this child was actually executed.
After the third ‘…’, the concurrency information for the child is shown. The important bit is that the dependency mutex is unique to the child, whilst the regular mutex is shared with the parent.
Now what would happen if I execute the exact same SQL (select 8 from dual) in another container, PDB2, alias container id 4?
SQL> show con_id

CON_ID
------------------------------
4

SQL> select 8 from dual;

         8
----------
         8
And look at bucket 100089 again (using the dump of the library cache at level 16):
Bucket: #=100089 Mutex=0x73f81e58(8589934592, 4, 0, 6)
  LibraryHandle: Address=0x656e2638 Hash=372d86f9 LockMode=N PinMode=0 LoadLockMode=0 Status=VALD
    ObjectName: Name=select 8 from dual
      FullHashValue=f18bf11763dd069341c61ce6372d86f9 Namespace=SQL AREA(00) Type=CURSOR(00) ContainerId=1 ContainerUid=1 Identifier=925730553 OwnerIdn=9
...
    Concurrency: DependencyMutex=0x656e26e8(0, 2, 0, 0) Mutex=0x656e2788(2, 31, 0, 6)
...
    Child: childNum='0'
      LibraryHandle: Address=0x656e0ed8 Hash=0 LockMode=N PinMode=0 LoadLockMode=0 Status=VALD
        Name: Namespace=SQL AREA(00) Type=CURSOR(00) ContainerId=3
...
        Concurrency: DependencyMutex=0x656e0f88(0, 0, 0, 0) Mutex=0x656e2788(2, 31, 0, 6)
...
    Child: childNum='1'
      LibraryHandle: Address=0x655fba58 Hash=0 LockMode=N PinMode=0 LoadLockMode=0 Status=VALD
        Name: Namespace=SQL AREA(00) Type=CURSOR(00) ContainerId=4
...
        Concurrency: DependencyMutex=0x655fbb08(0, 0, 0, 0) Mutex=0x656e2788(2, 31, 0, 6)
Executing an identical SQL statement (which is a cursor object in the library cache) in another PDB means the same library cache hash bucket is used, but the container id is one of the sharing criteria for a child. Because the statement has a different container id in each case, the cursors cannot be shared. This is very logical if you think about it: both containers are essentially logically isolated databases, so there is no way to tell whether the tables, access rights, data, really anything, are the same, and the SQL is therefore parsed again for the other container.
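If you don’t want to read trace dumps, the two children and their containers can also be observed from the root container via V$SQL. A sketch (in CDB$ROOT the V$ views show rows for all containers; the text literal must match exactly):

```sql
-- Run in CDB$ROOT: CON_ID shows which container each child belongs to.
select con_id, sql_id, child_number, executions
from   v$sql
where  sql_text = 'select 8 from dual'
order  by con_id, child_number;
```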
This has some implications. Because the SQL is identical, it hashes to the same value, which means both containers need access to the same bucket to find the pointer to the parent handle. Both containers also need access to the same parent handle and the same heap 0 to obtain the child list and find their child. Both the bucket and the handle have a single mutex that serializes access.
Access to the parent heap 0 is done shared, and remains shared if a compatible child can be found.
However, as soon as a new child needs to be created for the same SQL, it will create the new child handle pinned in exclusive mode and insert it into the hash table/child list in the parent’s heap 0. The new child is pinned in exclusive mode, because it still needs to allocate the child heaps and create the child information. This new child is the newest entry, and therefore is the first one any session would find if it scans for a compatible child.
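The reason an extra child was created can be inspected in V$SQL_SHARED_CURSOR; the REASON column (available since 11.2) holds an XML fragment describing the mismatch. A sketch, where the sql_id value is a placeholder you would take from V$SQL:

```sql
-- &sql_id is a placeholder: substitute the SQL_ID of the statement under test.
select con_id, child_number, reason
from   v$sql_shared_cursor
where  sql_id = '&sql_id';
```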
At this point multi-tenancy makes a difference:
– If a session in the same container starts parsing and follows the procedure of hashing the SQL text, obtaining the bucket, finding the parent handle and then scanning the child list for a compatible child, it will find the child pinned in exclusive mode for this container, and wait for the child creation to finish on the event ‘cursor: pin S wait on X’.
– If a session in ANOTHER container starts parsing and follows the same procedure of hashing the SQL text, obtaining the bucket, finding the parent handle and then scanning the child list for a compatible child, it will also find the child pinned in exclusive mode, but because that child has a different container id, it will skip it and continue scanning the child list for compatible children. Scanning means pinning each child that could potentially be compatible (that is, has the same container id) in shared mode; by doing that the session either finds a compatible child or, if it doesn’t, creates one itself.
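When sessions do end up waiting on ‘cursor: pin S wait on X’, the blocker can be identified from the wait parameters: P1 is the hash value of the cursor, and (on 64-bit platforms) the top half of P2 contains the SID of the session holding the mutex in exclusive mode. A sketch:

```sql
-- Top 32 bits of P2 = SID of the session holding the cursor pin in X mode.
select sid, p1 as cursor_hash_value, trunc(p2 / 4294967296) as blocking_sid
from   v$session
where  event = 'cursor: pin S wait on X';
```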
So what does that mean? The conclusion:
The library cache hash table, the parent cursor and the child list/hash table are shared between all containers. These require unique (library cache hash table/handle) or shared (child list/hash table) access for each database process for usage, but the actual time it is held in these modes is so short that it is negligible.
The main reason for waiting for a cursor/library cache entry is a child pinned in exclusive mode during creation (‘cursor: pin S wait on X’).
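Mutex contention that leads to actual sleeps is also summarized in V$MUTEX_SLEEP; the ‘Cursor Pin’ mutex type is the one behind ‘cursor: pin S wait on X’. A quick look:

```sql
-- Cumulative sleeps per mutex type and code location since instance startup.
select mutex_type, location, sleeps, wait_time
from   v$mutex_sleep
order  by sleeps desc;
```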
When an identical statement is executed in another container, it will skip a child that is created or in the process of being created in another container, even when it’s pinned in exclusive mode.
This means that the multi-tenant option in my opinion does not significantly increase the risk of waiting because of library cache concurrency, specifically because it can skip child cursors pinned exclusively in another container.
Update April 4: I tested having a child from one container pinned in X mode while running the same SQL in another container in 184.108.40.206.0 (the base release of 220.127.116.11), and it works exactly as described above. So whilst the library cache hash table and the parent handles are shared, the cursor children are specific to a container and do not lock each other out.