The Oracle Exadata IO Resource Manager Limit Clause

The Exadata IO Resource Manager (IORM) has been a fundamental part of the Exadata Database Machine since its introduction. The function of IORM is to guarantee that a database and/or category (set via the database resource manager) gets the share of IO defined in the resource plan at each storage server. The “share” here means the minimal amount of IO. This is widely known and documented in the Exadata documentation. Having a guaranteed minimal amount of IO means you can meet a service level agreement (SLA).

But what if you want to define the maximal amount of IO a database may get? The use case for a maximal amount of IO is consolidation: you can run a database on Exadata, and its performance does not change (or changes very little) once more databases are moved onto the same database machine. So instead of guaranteeing the minimal amount of IO a database must get (which is what the Exadata documentation describes), why not put a limit on the maximal amount a database can get?

This is possible through the ‘limit’ clause of the database resource plan (this option is not available in the category plan clause). This clause is not found in the documentation available to me, but it is listed when calling the help function in CellCLI for the ‘alter iormplan’ command.
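Setting such a plan could look like this (a sketch only; the database name ‘vxone’ and the percentages are examples, and the category plan is left empty here):

CellCLI> alter iormplan dbplan=((name=vxone, level=1, allocation=50, limit=10), (name=other, level=1, allocation=50)), catplan=''

The resulting plan can then be verified with ‘list iormplan detail’, as shown next.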

What does such a plan look like?

CellCLI> list iormplan detail
name: dm01cel01_IORMPLAN
dbPlan: name=vxone,level=1,allocation=50,limit=10
status: active

This is the IO resource manager plan on one of the cells in a database machine.
What we see here is a database plan which says the database ‘vxone’ must always get at least 50% of all IO capacity, and all others (name=other) also get 50%. But now the purpose of this blog: I’ve limited the database ‘vxone’ to get only 10%!

Does it really work? A good practice is to check everything you are told, especially if it is something widely unknown and not in the documentation.

Let’s scan a large table without any limit set:

SQL> select count(*) from cg_var;


Elapsed: 00:00:45.97

This is the same SQL, but now with the limit set to 10% (as seen in the iormplan listed above):

SQL> select count(*) from cg_var;


Elapsed: 00:02:27.28

It is clear the limit clause limits the IO capability of this database: scanning the same table using the same SQL takes roughly three times as long! I ran the tests a few times to see if both times are consistent, and they are!

  1. Very interesting Frits. This is quite handy for consolidation where many applications are combined on Exadata. We had a client that was most concerned that the first few applications moved to the platform would be disappointed when others were moved later, causing them to slow down somewhat from the initial performance. Instance Caging can also be used to limit CPU consumption on the DB tier for the same reason.

  2. Good post, Frits. IORM is a good feature. I’d just like to point out, however, that IORM is unaware of HCC, so limiting the reading of extremely compressed data doesn’t always pay off in CPU relief. For instance, if you limit me to a 1MB read, but the payload is actually 10MB (compressed), then I’m going to take a bit more CPU to do “whatever” than if it were just a simple 1MB of uncompressed data.

    * Note: I’m more than aware that filtration occurs on data in compressed form, so a query for a non-existent needle in a haystack costs about the same when applying predicates to 10x HCC versus uncompressed…

    • Thank you for the very useful comment Kevin!

      Because the only documentation of this feature is the help text of the ‘help alter iormplan’ command, I do not know the specific properties of the limit clause. The number in the clause would be a percentage. But of what? CPU slices in a given time or IO requests?

  3. Good post Frits. By the way, the reason the IORM LIMIT clause is only available in the database IORM plan is because Category IORM actually works in tandem with the DBRM (using DBRM plan directives). If you want to specify a hard limit on I/O for your Category plans then use the MAX_UTILIZATION_LIMIT in your DBRM resource plan. By the way this works in the same way for your intradatabase IORM plans (DBRM consumer groups). The difference between using the MAX_UTILIZATION_LIMIT for Category and intradatabase plans and using LIMIT in the interdatabase IORM plan on the storage cells is that the MAX_UTILIZATION_LIMIT also sets a hard limit on CPU allocation as well.
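    To illustrate the MAX_UTILIZATION_LIMIT directive mentioned above, a DBRM plan with a hard utilization cap could be sketched as follows (a minimal sketch; the plan name ‘LIMIT_PLAN’, consumer group ‘OLTP_GROUP’, and the 25% cap are made-up examples):

    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.CREATE_PLAN(
        plan    => 'LIMIT_PLAN',
        comment => 'plan with a hard utilization limit');
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan                  => 'LIMIT_PLAN',
        group_or_subplan      => 'OLTP_GROUP',
        comment               => 'cap utilization at 25%',
        max_utilization_limit => 25);
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan             => 'LIMIT_PLAN',
        group_or_subplan => 'OTHER_GROUPS',
        comment          => 'everything else');
      DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
    END;
    /

    As described above, unlike the storage-cell ‘limit’ clause, this directive also caps CPU allocation for the consumer group.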

  4. So, what *is* that number anyway?!?
    The number is expressed as a percentage of the total I/O requests but internally it is actually more of a mathematical probability thing. If the gold database is set to 80% and the silver database is set to 40% then the probability of gold getting an I/O request scheduled is twice that of the silver database.

    So, when does IORM engage?
    The Oracle documentation says that IORM will engage and manage IO requests according to your IORM resource plans when the cell becomes heavily loaded (or reaches “saturation”). So what does that mean? The docs a pretty vague on the matter but my understanding is that each cell disk has an I/O queue. When the cell disk I/O gets full (and presumably stays full for a threshold period of time) then I/O requests for that cell disk will be redirected through, and managed by IORM according to your IORM plan directives.

    • “When the cell disk I/O gets full …” should read “When the cell disk I/O queue gets full …”

    • That is what makes the limit clause interesting: it can limit IO even when the system is idle, which is what my example shows. That database machine is *completely* idle, besides my ‘count(*)’ request!
      This also means the documentation is wrong: the limit clause acts even on a system which has capacity left.

      Indeed, the allocation clause only ‘kicks in’ once IO requests are queueing. What happens then is that the resource manager re-orders IO requests based on the resource manager plan.

      The percentage on a certain level in a plan cannot exceed 100%.

      • That is correct. LIMIT and MAX_UTILIZATION_LIMIT are absolute and active whenever the plan is active. That is why I refer to them as hard limits. It is also why they are useful for stabilizing the performance profile of databases no matter how busy or idle the system is. They are the exceptions to the rule though. All of the other directives are only enforced when the resource is in short supply.

  5. Fairlie rego said:

    IORM has never worked for me. On a V2 machine I had a test case where the cluster used to reboot when IORM was enabled.

    • I think the Exadata-specific software is fairly new, and heavily worked on by Oracle. My tests didn’t result in any stability issues, and talking (emailing) to Randy (who did a much more thorough investigation for the Expert Exadata book) didn’t give me the impression stability is an issue.

      But I totally understand that when the cluster reboots under your hands when using a certain function, you become a bit reluctant to use it.

      • I haven’t seen any stability issues with IORM. Sounds like a bug or configuration issue.

        This may be a little off topic, so I hope Frits doesn’t mind. There are a couple of parameters you should set to avoid corner-case node evictions due to IO brown-outs. These should be in your dbmDbTemplate.dbt created by OneCommand. If you are not basing your new databases on this template, you should dig into it and take a look at some of the parameters being set in there.


    • Sue Lee said:

      Thanks for posting this, Frits! The “limit” clause or directive is very handy when consolidating since it helps you achieve more predictable performance. Users in the first database won’t be disappointed by a drop in performance as more databases are added.

      The “limit” directive limits the database’s disk utilization. You can think of disk utilization as the percentage of time that a disk is busy. If there’s an outstanding I/O request to the disk 60% of the time, then the disk utilization is 60%. This is the same definition used by the “%util” statistics shown by “iostat -x”. So if you set a database’s “limit” directive to 25%, it means that the database’s disk utilization is restricted to 25%.

      Whether or not you use this directive, you can use the metrics DB_IO_UTIL_SM and DB_IO_UTIL_LG to monitor the disk utilization of each database for small and large I/Os. We just published MOS note 1337265.1 which contains a script and tips for monitoring I/Os in Exadata.
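      Querying those metrics on a storage cell could look like this (a sketch; the pattern matches the metric names mentioned above, and the values will obviously differ per system):

      CellCLI> list metriccurrent where name like 'DB_IO_UTIL_.*'

      This lists the current small-IO and large-IO disk utilization percentage per database on that cell.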

  6. Sue Lee said:

    The version of the Exadata storage cell software contains a lot of fixes for IORM. One of these fixes addresses this intermittent bug that causes a cell reboot.
