Oracle 11g

Performance Management

Oracle Database 11g adds several weapons to the arsenal of the DBA looking to improve performance and scalability. Automatic native compilation of PL/SQL code means you’ll now get much faster PL/SQL performance, automatically, when you upgrade applications to the new release. The new server result cache feature dramatically increases performance for SQL queries as well as for PL/SQL functions by retrieving results straight from the cache instead of reexecuting code. There is also a new client-side result caching feature for OCI applications.

The new SQL Plan Management (SPM) feature helps you stabilize SQL execution plans to prevent a performance regression due to unexpected changes in the execution plans of SQL statements. By default, in Oracle Database 11g, the database automatically tunes SQL statements by running a new daily task called the Automatic SQL Tuning Advisor task as part of the automated task management feature.

There are major innovations in the use of the Automatic Database Diagnostic Monitor (ADDM), including the ability to run the ADDM in dual modes at the database level and at the cluster level to help diagnose performance problems in an Oracle Real Application Clusters (RAC) environment. There are significant improvements in the Automatic Workload Repository (AWR) baselines, including the new concept of moving window baselines and baseline templates. You can now configure adaptive thresholds for various database alerts based on AWR’s new system_moving_window baseline. The database advisor framework, introduced in Oracle Database 10g, is extended in this release.

Here are the main topics we discuss in this chapter:

  • SQL and PL/SQL performance improvements
  • New result caching functionality
  • Automatic SQL Tuning Advisor task
  • SQL Access Advisor enhancements
  • ADDM enhancements
  • Adaptive cursor sharing
  • Performance-related changes in Database Control
  • Optimizer and statistics collection enhancements
  • SQL Plan Management

We’ll now review the major new performance-related features in the Oracle Database 11g release, starting with SQL and PL/SQL performance improvements.

SQL and PL/SQL Performance Enhancements

Oracle Database 11g introduces several enhancements in PL/SQL programming and execution. Some of these improvements are transparent, such as the PL/SQL result cache, which caches execution results so that frequently executed identical PL/SQL statements can use those results instead of executing the same statement repeatedly.

In addition to performance-improving enhancements, several language features improve the functionality and usability of PL/SQL programming. You’ll learn about these programming enhancements in Chapter 11. Let’s turn to a review of the important performance-enhancing new PL/SQL features.

Automatic “Native” PL/SQL Compilation

In previous releases of the database, Oracle transformed all PL/SQL code to C code, which in turn was compiled by a third-party C compiler. This posed a problem in organizations that didn’t want to or couldn’t install a C compiler on their servers. In Oracle Database 11g, Oracle directly translates the PL/SQL source code to the DLL for the server. In addition, Oracle bypasses the file system directories by doing the linking and loading itself.

Except for setting one new initialization parameter, you don’t need to configure anything to take advantage of PL/SQL native compilation. This parameter is plsql_code_type, which you can use to turn automatic native PL/SQL compilation on and off. Tests performed by Oracle showed a performance improvement of up to 20 times with native PL/SQL compilation.

Using Real Native Compilation

By default, the database interprets PL/SQL code instead of directly compiling it. You can now use PL/SQL native compilation without any third-party software such as a C compiler or a dynamic link library (DLL) loader. The new initialization parameter plsql_code_type specifies the compilation mode for PL/SQL library units and can take one of two values: interpreted or native.

  • If you set the parameter to interpreted, the database will compile all PL/SQL code to PL/SQL bytecode format, and the PL/SQL interpreter engine will execute it.
  • If you set the parameter to native, most PL/SQL code will be compiled to machine code and executed natively without the involvement of an interpreter, thus making execution faster.

The default value of the plsql_code_type parameter is interpreted. To turn on native PL/SQL compilation, set the plsql_code_type initialization parameter as follows:

 SQL> alter system set plsql_code_type = native;

You can change the value of the plsql_code_type parameter dynamically, with either an alter system or alter session statement. However, the switch in compilation mode won’t affect PL/SQL units that have already been compiled. Once the database compiles a PL/SQL unit, all recompilations will continue to use the original mode in which that unit was compiled, be it native or interpreted.
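
For instance, to switch the compilation mode only for PL/SQL units compiled from your own session, you could issue this:

 SQL> alter session set plsql_code_type = native;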

Oracle Database 11g generates DLLs from the PL/SQL source code and stores the code in the database catalog, from where it loads the code directly without having to stage it on a file system first. Oracle claims the following regarding real native PL/SQL compilation:

  • Compilation is twice as fast as C native compilation.
  • The Whetstone Benchmark shows that real native compilation is two-and-a-half times faster than C native compilation.

Setting Up a PL/SQL Program Unit for Native Compilation

In this section, we’ll show how to set up PL/SQL native compilation for a single procedure. To enable native compilation of our test procedure, first change the value of the plsql_code_type parameter from its default value of interpreted to native, either by placing this in the parameter file or by using the alter system or alter session command. Once you do this, the database will natively compile any PL/SQL program unit you create.

SQL> select name, value from v$parameter
     where name like '%plsql%';

NAME                     VALUE
----------------------   ----------------------
plsql_code_type          INTERPRETED
plsql_optimize_level     2
...
9 rows selected.

SQL>

The other way is to take an already created PL/SQL program unit, which is going to be interpreted by default, and use the alter <PL/SQL unit type> statement to enable PL/SQL native compilation. In the example here, you first create a test procedure with the plsql_code_type parameter still set to the default value of interpreted.

SQL> create or replace procedure test_native as
2 begin
3 dbms_output.put_line('Test Procedure.');
4* end test_native;
SQL> /
Procedure created.
SQL> select plsql_code_type
2 from all_plsql_object_settings
3 where name='TEST_NATIVE';

PLSQL_CODE_TYPE
---------------------------
INTERPRETED

Once you create the test procedure (test_native), you issue the alter procedure statement in the following way to change the code type to native:

SQL> alter procedure test_native compile plsql_code_type=native;

Procedure altered.

SQL>

You can confirm that the test procedure is now compiled for native execution by executing the following query on the ALL_PLSQL_OBJECT_SETTINGS view:

SQL> select plsql_code_type
  2  from all_plsql_object_settings
  3* where name='TEST_NATIVE';

PLSQL_CODE_TYPE
---------------------------
NATIVE

SQL>

The plsql_code_type column has the value native for the test procedure named test_native, indicating that the alter procedure statement has succeeded.

Recompiling a Database for PL/SQL Native Compilation

If you want to recompile all existing PL/SQL modules in your database to native, you must start your database in upgrade mode and execute the dbmsupgnv.sql script provided by Oracle. Here are the steps:

  1. Shut down the database in normal or immediate mode:

     SQL> shutdown immediate;

  2. Edit the parameter file for the instance, and set the plsql_code_type parameter to native, as shown here:

     plsql_code_type=native

  3. Also make sure the value of the plsql_optimize_level parameter is at least 2 (which is the default):

     plsql_optimize_level=3

  4. Start the database in upgrade mode:

     SQL> connect sys/sammyy1 as sysdba
     Connected to an idle instance.
     SQL> startup upgrade
     ORACLE instance started.
     ...
     Database opened.

  5. Once the instance comes up, execute the $ORACLE_HOME/rdbms/admin/dbmsupgnv.sql script as the user sys:

     SQL> @$ORACLE_HOME/rdbms/admin/dbmsupgnv.sql
     DOC>###############################################################
     DOC>###############################################################
     DOC> dbmsupgnv.sql completed successfully. All PL/SQL procedures,
     DOC> functions, type bodies, triggers, and type body objects in
     DOC> the database have been invalidated and their settings set to
     DOC> native.
     DOC>
     DOC> Shut down and restart the database in normal mode and
     DOC> run utlrp.sql to recompile invalid objects.
     SQL>

     The dbmsupgnv.sql script updates the execution mode of all PL/SQL modules to native.

  6. Shut down and restart the database after the updating of the PL/SQL units is complete. Since the update invalidates all the PL/SQL units, run the utlrp.sql script located in the $ORACLE_HOME/rdbms/admin directory to recompile all the invalidated PL/SQL units:

     SQL> shutdown immediate;
     SQL> startup
     ORACLE instance started.
     ...
     Database opened.
     SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql
     ...
     SQL> Rem END utlrp.sql
     SQL> exit

Once you update your database to compile PL/SQL in native mode, you don’t have to use any alter <PL/SQL unit> commands any longer. By default, the database will now execute all PL/SQL units in native mode.

The effect of the recompilation process isn’t permanent. If you decide to take the database back to the default mode of interpreted execution, follow the same process as shown here, with one exception—just replace the dbmsupgnv.sql script with the dbmsupgin.sql script.

Faster DML Triggers

This is one of the new features of Oracle Database 11g that doesn’t require you to do anything to reap the benefits! All DML triggers run much faster in Oracle Database 11g. According to one of Oracle’s internal tests, there was a 25 percent improvement in the performance of a row-level update trigger fired during DML. Please see Chapter 11 for several new trigger-related features.

Adaptive Cursor Sharing

Using bind variables reduces the amount of shared memory the database needs to allocate to parse SQL statements, because bind variables reuse a single cursor for multiple executions of the same (or similar) SQL statement. Because they reduce parse time and memory usage, bind variables enhance the performance and scalability of the database, especially when you have a large number of concurrent users.

The initialization parameter cursor_sharing determines which SQL statements can share the same cursor. Setting the cursor_sharing parameter to exact stipulates that only identical statements can share a cursor. Setting the parameter to force means statements that differ in some literals can share a cursor. Setting the parameter to the value similar leads to the same behavior as setting it to force, unless the different values of the literals affect the degree of optimization of the execution plan.

However, cursor sharing inherently conflicts with SQL optimization, because specifying literal values instead of bind values provides richer information to the optimizer, leading to better plans. For this reason, forced cursor sharing especially could lead to suboptimal execution plans. In some cases, some users of a SQL statement may get highly optimal executions, while others may get quite inferior executions because of the specific values of the actual bindings.

Columns with heavily skewed data distribution need different execution plans based on the actual values of the bind variables in a SQL query, so when you use bind variables, you may end up with suboptimal execution plans. Oracle uses the concept of bind peeking, under which the optimizer examines the bind values the first time you execute a statement in order to evolve an optimal execution plan for subsequent executions of that statement.

The optimizer will look at the bind variable values during the first hard parse and base its plan strategy on those values. Bind variable peeking helps you only when the optimizer first generates the execution plan for a new SQL statement. However, if the data is heavily skewed, bind peeking has little value because different data for the bind variables requires different execution plans, making the use of literals a better strategy. For example, if the values the optimizer sees during its “peeking” warrant using an index, it’ll continue to use the index even for other values of the bind variable when a full scan may be a better strategy.

Oracle Database 11g takes a major step toward resolving the inherent conflict between cursor sharing and query optimization by introducing the concept of adaptive cursor sharing. Under adaptive cursor sharing, SQL statements automatically share a cursor. Oracle generates multiple execution plans for a statement that uses bind variables when it detects that the cost of sharing a single cursor for that statement outweighs the benefit of the lower parse time and memory usage that flow from using the same cursor. Oracle still attempts to keep the number of generated child cursors to a minimum to take advantage of cursor sharing. In a nutshell, Oracle attempts to avoid the “blind sharing” of cursors while minimizing the number of child cursors.

How Adaptive Cursor Sharing Works

The key to adaptive cursor sharing is the bind sensitivity of a cursor and the concept of a bind-aware cursor. A query is considered bind-sensitive if the optimizer performs bind peeking when figuring out the selectivity of the predicates and a change in bind variable values potentially leads to different execution plans. If a cursor in the cursor cache has been marked for bind-aware cursor sharing, the cursor is called bind-aware.

Adaptive cursor sharing uses bind-aware cursor matching. Unlike in previous releases, the cursors can be bind-sensitive now. We’ll now show a simple example that illustrates how adaptive cursor sharing works in practice.

Let’s say you have the following SQL query that the database executes multiple times:

 SQL> select * from hr.employees where salary = :1 and department_id = :2;

The previous SQL statement uses two bind variables, one for the salary column and the other for the department_id column.

The first time the database executes the SQL statement, there is a hard parse. If the optimizer peeks at the bind values and uses histograms to compute the selectivity of the predicate with the two bind variables, the cursor is marked as a bind-sensitive cursor. The predicate selectivity information is stored in a cube, let’s say (0.15, 0.0025). Once the database executes the query, it stores the execution statistics of the cursor in the cursor. Under adaptive cursor sharing, the database monitors the query execution of a new SQL statement for a while and gathers information to help decide whether it should switch to bind-aware cursor sharing for the query.

When the database executes the SQL statement the next time with a different pair of bind values, the database performs a soft parse as usual and compares the execution statistics to those stored in the cursor. Based on all the previous execution statistics for this cursor, the database decides whether to mark this cursor as bind-aware.

If the cursor is marked bind-aware, the database uses bind-aware cursor matching during the next soft parse of the query. Each time the database executes the query, it performs a cursor sharing check using the predicate’s bind variable selectivity estimates, which are stored in the selectivity cubes. Each plan has a selectivity range or cube associated with it. If the new bind values fall within this range, they will use the same plan. That is, if the selectivity of the predicate with the new pair of bind values is within the existing cube or range of values, the database uses the same execution plan as that of the existing child cursor.

If the selectivity of the predicate doesn’t fall inside the existing cube, then the database can’t find a matching child cursor. As a result, the database performs a hard parse, generating a new child cursor with a different execution plan. If two hard parses generate an identical execution plan, the child cursors are merged.

Briefly, bind-aware cursor sharing means that the database will share an execution plan when the bind values are roughly equivalent. The optimizer will delineate a selectivity range for an execution plan, and it’ll use the same plan if the new bind values fall within this selectivity range. If the binds are not equivalent according to the selectivity range, the optimizer will generate a new execution plan with a different selectivity range.

Monitoring Adaptive Cursor Sharing

Several new database views, as well as new columns in the V$SQL view, help you monitor adaptive cursor sharing in the database. Since adaptive cursor sharing is automatic, you don’t actually have to do anything. Here’s a brief description of the new views (a sample query against one of them follows the list):

  • V$SQL_CS_HISTOGRAM: Shows the distribution of the execution count across the execution history histogram.
  • V$SQL_CS_SELECTIVITY: Shows the selectivity cubes or ranges stored in cursors for predicates with bind variables.
  • V$SQL_CS_STATISTICS: Contains the execution information gathered by the database to decide whether it should use bind-aware cursor sharing, including buffer gets and CPU time, among other statistics.
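
For example, you can query V$SQL_CS_SELECTIVITY to see the selectivity ranges the database has stored for a bind-sensitive statement (the sql_id value here is just a placeholder):

SQL> select child_number, predicate, low, high
  2  from v$sql_cs_selectivity
  3  where sql_id = '57pfs5p8xc07w';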

In addition to these new views, the V$SQL view has two important new columns to support adaptive cursor sharing. The IS_BIND_SENSITIVE column shows whether a cursor is bind-sensitive. The IS_BIND_AWARE column shows whether a cursor in the cursor cache has been marked to use bind-aware cursor sharing. Here’s a simple query showing the two new columns:

SQL> select sql_id, executions, is_bind_sensitive, is_bind_aware
  2  from v$sql;

SQL_ID          EXECUTIONS  I  I
-------------   ----------  -  -
57pfs5p8xc07w   21          Y  N
1gfaj4z5hn1kf   4           Y  N
1gfaj4z5hn1kf   4           N  N
...
294 rows selected.

SQL>

If the IS_BIND_SENSITIVE column shows Y, it means the optimizer is planning to use multiple execution plans, depending on the value of the bind variable. If the IS_BIND_AWARE column shows Y, it means the optimizer knows that the bind variable values result in different data patterns. In this case, the optimizer may hard-parse the statement.

New Result Caching Functionality

Oracle Database 11g introduces several new caching features that let you utilize memory more efficiently, which results in faster query processing. There are actually two types of caching features: the server result cache, which caches SQL query results as well as PL/SQL function results in the SGA, and the OCI consistent client cache (client cache), which lets you cache query results on the client. The client cache is especially useful for large-scale stateless web applications driven by frameworks such as PHP. We’ll review the two types of caching features in the following sections, starting with the server result cache.

Using the Server Result Cache to Enhance Performance

The server result cache is a new concept of Oracle Database 11g that enables the database to cache SQL query and PL/SQL function results in memory. The database serves the results for frequently executed SQL queries and PL/SQL functions straight from the cache instead of reexecuting the actual query or function all over again. You can imagine the dramatic savings in resource usage (for example, IO) as well as the improved response times when you use cached results. Both logical and physical IO waits are virtually eliminated since the database fetches the necessary results from memory. The server result cache contains a SQL query result cache and a PL/SQL function result cache, both of which share an identical infrastructure.

All database sessions can share the cached results of a query, as long as they share execution plans even partially. Oracle’s internal tests show that the server result cache leads to gains as high as a 200 percent improvement in performance for workloads that are read-intensive.

In Oracle Database 11g, there is a new SGA component called the result cache, which is actually part of the shared pool in the SGA. By default, the server result cache uses a small part of the shared pool, even when you don’t explicitly allocate memory for the cache. The default maximum size depends on the size of the SGA as well as the memory management method you’re using. To set the size of the result cache memory explicitly, you use the result_cache_max_size initialization parameter. If you’re manually managing the shared pool, make sure you increase the shared pool size when you increase the result cache size, since the result cache draws its memory from the shared pool. Once you set the result cache memory size, the automatic shared memory management infrastructure will automatically manage the memory you allocate for the server-side result cache.

The new PL/SQL package DBMS_RESULT_CACHE provides various procedures to administer the result cache feature, including monitoring and managing the cache. The V$RESULT_CACHE_* views let you determine the success of a cached SQL query or a PL/SQL function by showing the cache-hit success of the query or function.

In the following sections, you’ll learn how the two components of the server result cache—the SQL query result cache and the PL/SQL result cache—work. You’ll also learn how to manage the two types of caches, which together make up the server result cache.

SQL Query Result Cache

You can now cache frequently executed SQL query results in the SQL query result cache. It doesn’t take a rocket scientist to figure out that extracting results from the cache takes far less time than actually running the SQL query. You’ll see significant database-wide performance improvements when you use the SQL query result cache.

Query result caching is ideal in the following circumstances:

  • The query processes a large number of rows to yield just a handful of rows or even a single row.
  • The database executes the query frequently, with little or no changes in the data itself.

Thus, although technically speaking you can apply the SQL query result cache to any kind of workload, data warehousing applications are the most common beneficiaries of the cache.

The database automatically invalidates the cached results of a SQL query when there is a change in the data that’s part of the query or a change in any of the objects that are part of the query.

Administering SQL Query Result Caching

Three new initialization parameters—result_cache_mode, result_cache_max_size, and result_cache_max_result—are crucial in managing the server-side result cache. The initialization parameter result_cache_mode determines whether and under what circumstances query result caching will apply. The result_cache_max_size initialization parameter determines the size of the result cache memory allocation. We’ll talk about the three result cache–related initialization parameters in the following discussion.

You can control when the database uses the SQL query result cache by setting the result_cache_mode initialization parameter. If you turn caching on, a ResultCache operator is added to the execution plan of the cached SQL query. The setting of the result_cache_mode parameter determines when the optimizer will add the ResultCache operator to a query’s execution plan. Here are the three possible values for the parameter:

  • The default value of this parameter is manual, which means the ResultCache operator will be added (results cached) only if you use the new result_cache hint in the SQL query.
  • If you set the value to auto, the cost optimizer will determine when it should cache the query results, based on factors such as the frequency of execution, the cost of execution, and how frequently the database objects that are part of the query are changing.
  • If you set the value to force, the database caches the results of all SQL statements, as long as it’s valid to cache the result. That is, the database will cache the results of all SQL statements where it’s possible to do so.

Here’s how you use the alter system statement to enable result caching in a database:

 SQL> alter system set result_cache_mode = force;

The result_cache_mode parameter helps you enable the query result cache at the database level. You can enable just session-level caching by setting the parameter with an alter session statement. You can override the setting of the result_cache_mode initialization parameter by specifying the new optimizer hints result_cache and no_result_cache in order to turn SQL query caching on and off. The hints will override both the auto and force settings of the result_cache_mode parameter.
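
For example, to turn forced result caching on for just your own session rather than the entire database, you could issue this:

 SQL> alter session set result_cache_mode = force;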

If you set the result_cache_mode parameter to manual, you must then specify the result_cache hint in a query for the database to cache the query results. The result_cache hint tells the database to cache the current query’s results and to use those results for future executions of the query or query fragment. If you set the result_cache_mode parameter to auto or force, on the other hand, the database will try to cache the results of all the queries. If you don’t want to cache the results of any query under these circumstances, you must specify the no_result_cache hint in the query.

The no_result_cache hint, like the result_cache hint, overrides the value set for the result_cache_mode parameter. Here’s how you’d use the result_cache hint in a query to turn query caching on for that query:

 SQL> select /*+ result_cache */ avg(income), region
      from employees
      group by region;

Although the result_cache_mode initialization parameter determines whether the database caches the query results, the result_cache_max_size parameter determines the maximum amount of the SGA that the database can allocate to the result cache. Note that the result cache size you set by using this parameter applies to both components of the server-side result cache—the SQL query result cache as well as the PL/SQL result cache.

Even if you don’t set a specific size for the result cache, the result_cache_max_size parameter always has a positive default value, derived mainly from the memory_target parameter (or the sga_target or shared_pool_size parameter, if you configure one of them). By default, the cache takes 0.25 percent of the memory_target parameter’s value, 0.5 percent of the sga_target value, or 1 percent of shared_pool_size, if you set it. However, the maximum allocation can’t exceed 75 percent of the size of the shared pool. In the following example, we haven’t used the result_cache_max_size parameter in the parameter file. However, the database internally allocated a default amount of 393,216 bytes for the parameter (Maximum Cache Size in the output). In the example, we generate a server result cache usage report employing the DBMS_RESULT_CACHE package:

SQL> set serveroutput on
SQL> exec dbms_result_cache.memory_report
R e s u l t C a c h e M e m o r y R e p o r t
Block Size = 1024 bytes
Maximum Cache Size = 393216 bytes (384 blocks)
Maximum Result Size = 19456 bytes (19 blocks)
Total Memory = 13412 bytes [0.017% of the Shared Pool]
... Fixed Memory = 10560 bytes [0.013% of the Shared Pool]
... State Object Pool = 2852 bytes [0.004% of the Shared Pool]
... Cache Memory = 0 bytes (0 blocks) [0.000% of the Shared Pool]
PL/SQL procedure successfully completed.

Since we haven’t cached any SQL query results in the cache, the cache memory is 0 bytes, as shown in the output. The maximum value for this parameter depends on the operating system you’re using. You can completely disable result caching of any kind by explicitly setting the result_cache_max_size parameter to zero. After setting the parameter to zero, the memory report will show the following:

SQL> exec dbms_result_cache.memory_report
R e s u l t C a c h e M e m o r y R e p o r t
Cache is disabled.
PL/SQL procedure successfully completed.

The memory_report procedure displays the memory usage reports for the result cache. The result_cache_max_size parameter is static, so you can’t modify it without restarting the database.

You may be wondering, what if a single large SQL query result takes up the entire result cache? By default, any single query result is limited to a maximum of 5 percent of the result cache (the size of which is set with the result_cache_max_size parameter). However, you can modify this default value by using the result_cache_max_result parameter, which specifies the maximum proportion of the result cache any single query result can take up, in percentage terms. You can set any percentage value between 0 and 100 for the result_cache_max_result parameter.

There is a third result cache–related initialization parameter called result_cache_remote_expiration. You can use this dynamic initialization parameter to specify the time (in minutes) for which query results involving tables in remote databases stay valid. The default value of this parameter is 0, meaning the database won’t cache any results involving remote objects. Exercise care when you set this parameter to a positive value, since you may get incorrect results if the remote tables in the query have been modified in the meanwhile.
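
For instance, to let any single result occupy up to 10 percent of the result cache and to let results that reference remote tables stay valid for ten minutes, you could issue the following statements (both values are purely illustrative):

SQL> alter system set result_cache_max_result = 10;
SQL> alter system set result_cache_remote_expiration = 10;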

Viewing the Result Cache with EXPLAIN PLAN

You will notice that the explain plan for any query with a result_cache hint contains the ResultCache operator, as shown in the following example:

SQL> explain plan for
  2  select /*+ result_cache */ department_id, avg(salary)
  3  from hr.employees
  4  group by department_id;

Explained.

SQL> select plan_table_output from table(DBMS_XPLAN.DISPLAY());

Plan hash value: 1192169904

------------------------------------------------------------------------------------------------
| Id | Operation           | Name                       | Rows | Bytes | Cost (%CPU) | Time     |
------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT    |                            |   11 |    77 |     4  (25) | 00:00:01 |
|  1 |  RESULT CACHE       | 4t7p9c4m3xty05wpjas75taw9j |      |       |             |          |
|  2 |   HASH GROUP BY     |                            |   11 |    77 |     4  (25) | 00:00:01 |
|  3 |    TABLE ACCESS FULL| EMPLOYEES                  |  107 |    74 |     3   (0) | 00:00:01 |
------------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------
   1 - column-count=2; dependencies=(HR.EMPLOYEES);
       name="select /*+ result_cache */ department_id,avg(salary)
             from hr.employees
             group by department_id"

15 rows selected.

You can use the cache_id value provided in the explain plan to find details about the cached query results using the V$RESULT_CACHE_OBJECTS view, as shown here:

SQL> select status, name, namespace
  2  from v$result_cache_objects
  3  where cache_id='4t7p9c4m3xty05wpjas75taw9j';

STATUS      NAME                                                   NAMESPACE
----------  -----------------------------------------------------  ---------
Published   select /*+ result_cache */ department_id,avg(salary)   SQL

SQL>

The STATUS column’s value (Published) in the V$RESULT_CACHE_OBJECTS view reveals that the result for this query is available for use. The other possible values for the STATUS column are new (result under construction), bypass, expired, and invalid. The NAMESPACE column shows that this is a SQL statement. The other value for the NAMESPACE column is PLSQL.

Examples of Query Result Caching

We’ll use a simple example to show how the SQL query result cache feature works. First, specify result caching for your query using the result_cache hint. Before you create the newly cached SQL query, flush the server result cache as well as the shared pool so you start fresh with the new query whose results you want to store in the server result cache.

SQL> exec dbms_result_cache.flush
PL/SQL procedure successfully completed.
SQL> alter system flush shared_pool;
System altered.

The DBMS_RESULT_CACHE.FLUSH procedure purges the server result cache. In this example, we used a simple invocation of the flush procedure, but you can also specify that the database retain the free memory in the cache as well as the existing cache statistics. Alternatively, instead of flushing the entire contents of the server result cache, you can execute the invalidate procedure to invalidate only those results that depend on a specified database object or are owned by a specific user. You are most likely familiar with the alter system flush shared_pool statement, which flushes the shared pool, ensuring that there are no prior executions of the test SQL queries in the shared pool.
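
For instance, to invalidate only the cached results that depend on a single table, you can pass the owner and object name to the invalidate procedure (the HR.EMPLOYEES table here is just an example):

SQL> exec dbms_result_cache.invalidate('HR','EMPLOYEES')

PL/SQL procedure successfully completed.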

Here’s the test SQL query, with the result_cache hint:

SQL> select /*+ result_cache q_name(Query1) */ last_name, salary
  2  from hr.employees
  3  order by salary;

Note that we provided the optional q_name parameter within the result_cache hint to specify a name (Query1) for the new SQL query whose results the database will store in the server result cache. If you check the execution plan for the new SQL query, you’ll see that the query uses the result_cache optimizer hint:

SQL> explain plan for
  2  select /*+ result_cache q_name(Query1) */ last_name, salary
  3  from hr.employees
  4  order by salary;

Explained.

SQL> @$ORACLE_HOME/rdbms/admin/utlxpls

Plan hash value: 3447538987

------------------------------------------------------------------------------------------------
| Id | Operation           | Name                       | Rows | Bytes | Cost (%CPU) | Time     |
------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT    |                            |  107 |  1284 |     4  (25) | 00:00:01 |
|  1 |  RESULT CACHE       | 6xt7gzhawg9jg8m503j2njzdxm |      |       |             |          |
|  2 |   SORT ORDER BY     |                            |  107 |  1284 |     4  (25) | 00:00:01 |
|  3 |    TABLE ACCESS FULL| EMPLOYEES                  |  107 |  1284 |     3   (0) | 00:00:01 |
------------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------
   1 - column-count=2; dependencies=(HR.EMPLOYEES);
       name="select /*+ result_cache q_name(Query1) */ last_name,salary
             from hr.employees
             order by salary"

15 rows selected.

SQL>

Now that you know that the execution plan includes the result cache, execute the actual query, as shown here:

SQL> select /*+ result_cache q_name(Query1) */ last_name, salary
  2  from hr.employees
  3  order by salary;

When you first execute the query with the result_cache hint, the ResultCache operator checks the server result cache to see whether it already contains the results for this query. If the results are present there, of course, no execution is necessary, and the query returns the results from the server result cache. In this example, since there are no prior results in the server result cache (we actually purged the result cache before we issued our query), the first execution of the test query will execute the SQL statement and store the results in the server result cache. Execute the same statement again so it can use the cached results this time. Then query the V$RESULT_CACHE_STATISTICS view to examine the usage statistics for the new query:

SQL> select name, value from v$result_cache_statistics;

NAME                              VALUE
--------------------------------  ----------
Block Size (Bytes)                1024
Block Count Maximum               384
Block Count Current               32
Result Size Maximum (Blocks)      19
Create Count Success              1
Create Count Failure              0
Find Count                        1
Invalidation Count                0
Delete Count Invalid              0
Delete Count Valid                0

10 rows selected.

SQL>

The Create Count Success statistic shows a value of 1, meaning the database successfully created one cached query result in the server result cache.

The Find Count statistic also has a value of 1, meaning the cached results were found successfully by the second execution of the query.

The example here used a simple SQL query, but you can just as easily cache the results of an inline view by specifying a result_cache hint. Here’s an example:

SQL> select prod_subcategory, revenue
  2  from (select /*+ result_cache */ p.prod_category,
  3               p.prod_subcategory,
  4               sum(s.amount_sold) revenue
  5        from products p, sales s
  6        where s.prod_id = p.prod_id and
  7              s.time_id between to_date('01-JAN-2007','dd-mon-yyyy')
  8              and
  9              to_date('31-Dec-2007','dd-MON-yyyy')
 10        group by rollup(p.prod_category, p.prod_subcategory))
 11* where prod_category = 'Women';

In the inline view shown here, some optimizations such as view merging and column projection aren’t employed. Thus, the view takes a longer time to execute the first time, but the cached results mean that later executions will finish very fast. Note that in addition to caching the results of queries with identical predicate values, the caching feature ensures that queries using a different predicate value for the column inside the where clause (prod_category) will also get their results back quickly.

Here’s a simple example showing the dramatic improvement in query performance when you use the result cache.

  1. First, check to ensure that the database isn’t currently caching query results:

     SQL> show parameter result_cache

     NAME                              TYPE         VALUE
     -------------------------------   -----------  -------
     client_result_cache_lag           big integer  3000
     client_result_cache_size          big integer  0
     result_cache_max_result           integer      5
     result_cache_max_size             big integer  672K
     result_cache_mode                 string       MANUAL
     result_cache_remote_expiration    integer      0

     SQL>

     Note that result_cache_mode has the value MANUAL, meaning result caching is turned off at the database level. You can override this by using the result_cache hint in your queries. Of course, you can set the value of the result_cache_mode parameter to either auto or force, as explained earlier, to turn caching on at the database level.

  2. Execute the following query first, and check the time it takes to get the results back:

     SQL> select * from
       2  (select * from (select manager_id, department_id, max(e.salary) maxsalary
       3  from emp1 e
       4  group by e.manager_id, department_id)
       5  order by maxsalary desc)
       6* where rownum <= 3;

     MANAGER_ID    DEPARTMENT_ID    MAXSALARY
     -----------   --------------   ----------
     100           90               7000
     100           80               14000
     100           20               13000

     Elapsed: 00:00:19.80
     SQL>

     The elapsed time is almost 20 seconds for this query.

  3. From a different session, turn on system-wide server query result caching using the following command:

     SQL> alter system set result_cache_mode=force;

     System altered.

     SQL>

  4. Return to the first session to run your test query again, and note the execution time:

     SQL> /
     ...
     Elapsed: 00:00:18.14
     SQL>

  5. Issue the query again, now that the query results are cached by the database, and check the timing:

     SQL> /
     ...
     Elapsed: 00:00:00.01
     SQL>

Note the dramatic improvement in query timing because of the query result cache: all the database had to do was fetch the cached result set from the server result cache.

Managing the Query Result Cache

You manage the SQL query result cache through the DBMS_RESULT_CACHE package (we used this package earlier in this section), which provides you with statistics that help manage the memory allocation for the query result cache. Here are the key procedures of the DBMS_RESULT_CACHE package:

  • exec dbms_result_cache.flush: Removes all results and clears the cache memory.
  • exec dbms_result_cache.invalidate('ABC','TESTTAB'): Invalidates cached results for a specified object.
  • select dbms_result_cache.status from dual: Shows the status of the result cache.
  • exec dbms_result_cache.memory_report: Shows result cache memory usage.

You can also use the following views to find out details about the query result cache:

  • V$RESULT_CACHE_STATISTICS: Shows cache settings and memory usage statistics.
  • V$RESULT_CACHE_OBJECTS: Lists all cached objects and their attributes.
  • V$RESULT_CACHE_DEPENDENCY: Shows the dependency information between the cached results and dependencies.
  • V$RESULT_CACHE_MEMORY: Shows all memory blocks and their statistics.

The V$RESULT_CACHE_OBJECTS view shows both cached results and all dependencies. The STATUS column can take the following values:

  • new: The cached result is still being built.
  • published: The cached result is available for use by other queries.
  • bypass: Other queries will bypass the cached result.
  • expired: The cached result has crossed the expiration time limit.
  • invalid: The cached result is unavailable for use by other queries.

Here’s a query that shows the type, status, and name of the objects in the result cache:

SQL> select type, status, name from v$result_cache_objects;

TYPE         STATUS      NAME
-----------  ----------  --------------------------------------------------
Dependency   Published   HR.COUNT_EMP
Dependency   Published   HR.EMPLOYEES
Result       Published   "HR"."COUNT_EMP"::8."COUNT_EMP"#fac892c7867b54c6 #1
Result       Published   select /*+ result_cache q_name(Query1) */
                         last_name,salary from hr.employees
                         order by salary

SQL>

The output of the previous query shows that there are two cached results in the result cache and that both are available for use.

The following limitations apply with regard to the operation of the query result cache:

  • When dealing with bind variables, a query can reuse a cached result only for identical variable values. Different values for the bind variables or different bind variable names lead to a cache miss.
  • Oracle won’t cache results if the query uses a noncurrent version of the data because of the enforcement of the read consistency principle.
  • Oracle won’t cache results if there are pending transactions on the tables that are part of the query.
  • Oracle won’t cache flashback queries.
  • All queries containing the following are ineligible for query result caching:
    • Nondeterministic PL/SQL functions
    • currval and nextval
    • SQL functions such as sysdate and current_date
    • Temporary tables
    • Dictionary tables

PL/SQL Function Result Cache

PL/SQL functions often return the results of computations performed by the queries they contain. When you’re dealing with frequently executed PL/SQL functions whose queries access infrequently changing data, the PL/SQL function result cache feature lets you specify that the function’s results be cached. Once you do this, when the database invokes the function with the same arguments as a cached invocation, instead of reexecuting that function, the database returns the cached results of the earlier execution. Of course, performance improves dramatically. If any of the tables that are part of the function’s queries undergo a DML change, the database purges the PL/SQL result cache.

You can cache the results of PL/SQL functions executed by a session in the SGA and make them available to all other sessions in the database. You don’t have to design a cache management policy to take advantage of this new feature—it is completely automatic. Frequently invoked complex PL/SQL functions that operate on relatively fixed data are ideal candidates for caching.

Enabling PL/SQL Result Caching

As we mentioned earlier, both server-side result caches—the SQL query result cache and the PL/SQL result cache—share the same infrastructure.

The database uses the same result cache buffer for both SQL query result caching and PL/SQL function result caching. Thus, PL/SQL result caching is automatically enabled in the database, with a default amount of cache memory (based on the SGA size), as long as you don’t explicitly turn caching off by setting the result_cache_max_size parameter to zero.
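
You can quickly confirm that the shared result cache infrastructure is active by calling the status function of the DBMS_RESULT_CACHE package; on an instance where you haven’t disabled caching, it returns ENABLED:

SQL> select dbms_result_cache.status from dual;

STATUS
----------------
ENABLED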

An Example

You can activate the PL/SQL function result cache by using the result_cache clause in a PL/SQL function, as shown in the following example:

SQL> create or replace function count_emp (dept_no number)
  2  return number
  3  result_cache relies_on (emp)
  4  is
  5  emp_ct number;
  6  begin
  7  select count(*) into emp_ct from emp
  8  where department_id = dept_no;
  9  return emp_ct;
 10  end;
 11  /

Function created.

SQL>

The previous code enables result caching on the function count_emp. The relies_on clause is optional, and you use it to indicate that the function relies or depends on the table (emp in this example) for its results. You can use the relies_on clause to specify the tables or views on which the function depends, and you can also specify a set of tables or views.

The clause ensures that the database purges (invalidates) the cached results when the dependent table undergoes a DML operation. This automatic purging of the results maintains cache consistency.
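
You can watch this invalidation happen. After a DML statement against the emp table commits, the cached result for count_emp shows up as Invalid in V$RESULT_CACHE_OBJECTS; the update below is just an illustrative change:

SQL> update emp set department_id = 20 where employee_id = 100;
SQL> commit;
SQL> select status, name from v$result_cache_objects
  2  where name like '%COUNT_EMP%';

STATUS     NAME
---------  ---------------------------------------------------
Invalid    "HR"."COUNT_EMP"::8."COUNT_EMP"#fac892c7867b54c6 #1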

Once you create a function that populates the result cache using the result_cache option, it’s time to test the function. To do this, you call the PL/SQL function from inside a SQL query, as shown here:

SQL> select dept_name, count_emp(dept_id) "employee count"
  2  from depts
  3  where dept_name='Accounting';

DEPARTMENT_NAME     EMPLOYEE_COUNT
----------------    --------------
Accounting          2

SQL>

When you issue the previous SQL query for the first time, it invokes the PL/SQL function and stores the results it retrieves in the result cache. When you subsequently execute the same query, it’ll simply retrieve the results directly from the result cache. You can examine the V$RESULT_CACHE_STATISTICS view, as in the case of the SQL query cache, to view the usage statistics for the PL/SQL result cache. Columns such as Create Count Success and Find Count hold the same meaning for the PL/SQL result cache as they do for the SQL query result cache.

If you apply a code patch to a PL/SQL unit such as a procedure, then the results in the PL/SQL result cache won’t reflect the correct results for that PL/SQL unit, unless you bounce the instance. In cases such as these, you can bypass the result cache, as shown here:

  1. Place the result cache in bypass mode:

     SQL> exec dbms_result_cache.bypass(true)

     PL/SQL procedure successfully completed.

  2. Flush the existing cached results:

     SQL> exec dbms_result_cache.flush

     PL/SQL procedure successfully completed.

  3. Apply the PL/SQL code patch.

  4. Turn off the PL/SQL result cache bypass mode:

     SQL> exec dbms_result_cache.bypass(false)

     PL/SQL procedure successfully completed.

When you turn result cache bypassing on, the database doesn’t use any cached results, and no results are stored in the cache until you turn bypass mode off. Please refer to Chapter 11 for a more programmatic discussion of the PL/SQL cross-session result cache, as well as a listing of the limitations of the cache.

OCI Consistent Client Cache

The OCI consistent client cache, also called simply the client cache, extends server-side query caching by letting client machines leverage their own memory to cache query results.

The client cache resides in the OCI client process memory and can store tables or entire data sets, usually on the application server. Instead of having the server repeatedly execute frequently used SQL queries, the client simply gets the results directly from the client cache, saving unnecessary round-trips between the client and the server. Increased scalability results from the lower CPU usage. Oracle’s benchmark studies show that for simple queries such as the ones that use lookup tables, usually involving a read but not a write, performance gains can be dramatic: as much as a 500 percent improvement in elapsed time and a 200 percent improvement in CPU time.

Queries from different client sessions that match in attributes such as their SQL text and bind values share the cached query result sets. This leads to a significant reduction in response time for frequently executed SQL queries as well as an increase in scalability.

The best aspect of the client cache feature is that you don’t have to make any changes to your application code to take advantage of it. Any application that uses an Oracle Database 11g OCI client can take advantage of the client cache. The OCI clients include ODBC, ODP.NET, PHP, the JDBC-OCI (Thick) driver, and the various Oracle precompilers.

The OCI client cache is particularly suitable for queries involving lookup tables that produce repeatable result sets or small static result sets. This type of caching is also ideal for frequently executed queries.

Once you enable client result caching, results can be cached on the client or on the server. When you think the results are no longer consistent or are simply incorrect, you can disable the client caching feature.

The Mechanics of Query Result Set Caching

You use the following two new initialization parameters to configure the client result cache:

  • client_result_cache_size: You can turn on client result caching by setting this parameter to a value greater than zero. The parameter sets the maximum size of the per-process client result cache in bytes. By default, the parameter has a value of zero, meaning the client cache is disabled by default (in contrast, the server cache is enabled by default). The value of this parameter (in bytes) sets the limit on the result set cache for each client process; the database assigns all OCI client processes a result cache that is equal to the value of this parameter. The client_result_cache_size parameter ranges from a minimum value of 0 to a maximum value that’s dependent on the operating system. You must restart the database after setting the initial value for this parameter or after making an adjustment, since it’s a static initialization parameter. You can override the client_result_cache_size parameter with the client configuration parameter oci_result_cache_max_size.
  • client_result_cache_lag: This parameter sets the maximum time that can elapse since the latest round-trip to the server before the OCI client query makes a fresh round-trip to get new data for its cached queries.
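
In the server parameter file, the two settings might look like this; the sizes shown are illustrative, not recommendations (the lag is expressed in milliseconds):

client_result_cache_size = 10M
client_result_cache_lag  = 3000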

You can also use an optional client configuration file (or make the settings part of the sqlnet.ora file on the client) to enable and disable OCI client caching for the OCI client processes that use the file.

The settings in the client configuration file override the client cache settings made through the server parameter file. Any OCI client process that uses the configuration file will have OCI client caching enabled for it. You can set the following optional parameters for clients in the client configuration file (a sample file follows the list):

  • oci_result_cache_max_size: Sets the maximum size of the query cache for a process (in bytes). This parameter overrides the value set for the client_result_cache_size initialization parameter on the server.
  • oci_result_cache_max_rset_size: Sets the maximum size (in bytes) of a single query result in the query cache for a process.
  • oci_result_cache_max_rset_rows: Sets the maximum size of a query result set (in rows) for a process.
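
A minimal client-side configuration fragment might look like the following; all three values are arbitrary examples:

OCI_RESULT_CACHE_MAX_SIZE = 67108864      # 64MB per-process cache; overrides the server setting
OCI_RESULT_CACHE_MAX_RSET_SIZE = 1048576  # no single result set may exceed 1MB
OCI_RESULT_CACHE_MAX_RSET_ROWS = 5000     # or 5,000 rows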

Enabling and Disabling Client Caching

OCI applications enable and disable client result caching by calling OCIStmtExecute() with the mode values OCI_RESULT_CACHE and OCI_NO_RESULT_CACHE, which override the SQL hints result_cache and no_result_cache.

Intra Unit Inlining

Intra unit inlining, also called procedure inlining when applied to PL/SQL procedures, is another Oracle Database 11g new feature that promises tremendous gains in PL/SQL performance. Oracle claims that by using this new feature appropriately, you could achieve performance gains by a factor of two to ten times. Intra unit inlining is a process that replaces a call to a procedure, function, subroutine, or method with a copy of the body of that object, which typically runs much faster than the actual call.

The database automatically finds candidates for inlining, but you can influence the process with the initialization parameter plsql_optimize_level and the pragma inline directive. Oracle suggests you inline only small, frequently run procedures. The PL/SQL performance tool plstimer can help you identify procedures that may benefit from inlining.

By setting the initialization parameter plsql_optimize_level to a value of 3, you can make the PL/SQL compiler search for PL/SQL calls that are candidates for inlining and inline the most promising of those calls. You can also set the optimization level for an individual program unit when you recompile it, as shown here:

 SQL> alter procedure test compile plsql_optimize_level = 3 reuse settings;

You can also use the pragma inline directive immediately before a call inside a program unit to direct the compiler to inline that particular subprogram call.
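
Here’s a minimal sketch of the pragma in action; the anonymous block and the add_tax function are invented for illustration. The pragma immediately precedes the statement containing the call it applies to, and inlining applies to subprograms local to the same PL/SQL unit:

declare
  v_total number := 0;
  function add_tax (p_amount number) return number is
  begin
    return p_amount * 1.08;  -- trivial computation, a good inlining candidate
  end;
begin
  for i in 1 .. 100000 loop
    pragma inline (add_tax, 'YES');  -- request inlining of the call below
    v_total := v_total + add_tax(i);
  end loop;
  dbms_output.put_line(round(v_total));
end;
/

With plsql_optimize_level set to 2, the pragma requests inlining of that particular call; at level 3, the compiler also looks for inlining opportunities on its own.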

Automatic SQL Tuning

Although a DBA can manually tune poorly performing SQL statements, the large number of distinct SQL statements in any real-life production database, together with the expertise required to write good SQL code, design fast data access, and work with explain plans and trace output, make manual tuning a daunting and extremely time-consuming task.

Oracle Database 10g gave you two excellent performance tuning tools, the SQL Tuning Advisor and the Automatic Database Diagnostic Monitor (ADDM), to facilitate tuning SQL queries. As candidates for tuning, ADDM picks the high-load SQL statements that provide the biggest bang for the buck in the form of improved performance. You must look at the ADDM reports and then run the SQL Tuning Advisor to receive recommendations for improving performance. The SQL Tuning Advisor makes recommendations to improve problem SQL queries, due either to a poorly designed query or to a query that wasn’t making use of optimal statistics. The SQL Tuning Advisor’s recommendations might include gathering statistics, adding indexes, creating a SQL profile, or modifying the SQL statements.

Even when you got recommendations from the SQL Tuning Advisor that saved you the rigors of manual SQL tuning, in Oracle Database 10g you were still in charge of evaluating and then actually implementing the recommendations made by the SQL Tuning Advisor. In Oracle Database 11g, Oracle goes much further in automating the entire SQL tuning process, as we explain in the next section.

Automation of SQL Tuning in Oracle Database 11g

In Oracle Database 11g, the new Automatic SQL Tuning Advisor task runs nightly by default. As is the case with the other two automated management tasks—the Optimizer Statistics Gathering and Automatic Segment Advisor tasks—the Automatic SQL Tuning Advisor task is enabled by default when you create a database or migrate an Oracle 10g database to the 11g release. The new automatic task, named sys_auto_sql_tuning_task, runs automatically every night as part of the automated maintenance task framework, looking for ways to speed up the performance of high-load SQL statements by improving their execution plans.

The Automatic SQL Tuning Advisor task does the following things:

  • Identifies inefficient SQL statements
  • Runs the SQL Tuning Advisor to get recommendations for improving the problem statements
  • Tunes problem statements by implementing the SQL profile recommendations made by the SQL Tuning Advisor

In the following sections, we’ll cover how the new automatic SQL tuning feature works in Oracle Database 11g.

Identifying Candidates for SQL Tuning

The Automatic SQL Tuning Advisor job runs every night during the maintenance window of the automated maintenance tasks feature. You don’t need to provide any SQL tuning sets for the job, because the tuning task selects its workload based on its own analysis of database performance. The Automatic SQL Tuning Advisor task depends on the AWR for the selection of candidates for further analysis. Automatic SQL tuning pulls from the AWR the SQL statements that were at the top of the list of poorly performing SQL during four different time periods, which it calls buckets: the top SQL for the week, for any day, for any hour, and the top SQL by average single-execution response time. Automatic SQL tuning then combines the top queries from the four buckets into one list by assigning weights.

The database considers the following types of queries ineligible for automatic SQL tuning, although you can still tune them (except ad hoc SQL queries) manually using the SQL Tuning Advisor:

  • Parallel queries
  • Ad hoc SQL
  • Any SQL statement that wasn’t executed more than once during the past week
  • DDL statements
  • Queries that are still at the top of the poorly performing query list, even after profiling
  • Recursive SQL statements

Running the Automatic SQL Tuning Advisor

The first step in automatic SQL tuning is to find the bad SQL statements to tune. Once the Automatic SQL Tuning Advisor task generates a list of candidate SQL statements, the advisor orders the candidate SQL list in order of importance. The SQL Tuning Advisor then tunes each of the candidate SQL statements in that order and recommends SQL profiles to improve the performance of the SQL statements.

Implementing the Recommendations

If the SQL Tuning Advisor finds an existing SQL profile, it checks the optimizer statistics to ensure they exist and are up-to-date. If not, it lets the gather_stats_task job know about it so that job can collect the necessary statistics.

The automatic SQL tuning process may make other recommendations besides the adoption of a SQL profile, such as creating new indexes, refreshing statistics, or even restructuring the SQL statements. However, you must review and manually implement these other recommendations yourself, as in Oracle Database 10g. The database implements only the SQL profiles automatically.

An interesting aspect here is that the database doesn't automatically implement all SQL profiles that the SQL Tuning Advisor generates during the tuning process. It implements only those SQL profile recommendations that will improve performance by at least threefold. To be more precise, here are the requirements a SQL profile must satisfy before the database automatically implements it:

  • There should be a threefold improvement in the sum of the CPU time and IO time.
  • Neither CPU time nor IO time must deteriorate because of the new SQL profile.

The reason for the previous set of requirements is to ensure that any contention caused by the new SQL profile doesn’t make performance actually worse than running the query without the SQL profile. The automatic SQL tuning process tests SQL statements with and without the SQL profile before making the recommendation to accept the new profile.

The automatic SQL tuning process doesn't modify the offending SQL statement in any way, and you can reverse its actions at any time by removing the new profiles that it implemented.

Managing Automatic SQL Tuning

Managing the automatic SQL tuning feature involves, first, turning the feature on and off and, second, managing the scheduling of the task during the maintenance window. You also need to configure the parameters of the Automatic SQL Tuning Advisor task. These task parameters include settings such as time limits on the tuning tasks, turning SQL profile implementation on and off, and even disabling the testing of new SQL profiles if the time to implement the changes is a major concern.
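As a minimal sketch of setting such task parameters (the parameter names follow the DBMS_SQLTUNE package; the values here are illustrative), you could stop the task from automatically accepting SQL profiles and cap the time it spends on each statement:

 SQL> -- don't auto-implement SQL profile recommendations
 SQL> exec dbms_sqltune.set_tuning_task_parameter( -
 >      'SYS_AUTO_SQL_TUNING_TASK', 'ACCEPT_SQL_PROFILES', 'FALSE');
 SQL> -- limit tuning time per statement to 1,200 seconds
 SQL> exec dbms_sqltune.set_tuning_task_parameter( -
 >      'SYS_AUTO_SQL_TUNING_TASK', 'LOCAL_TIME_LIMIT', 1200);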

Like most components of the advisor framework, you can manage automatic SQL tuning through Enterprise Manager Database Control (or Grid Control) or through Oracle-supplied PL/SQL packages. Next we'll cover how you can manage automatic SQL tuning through Enterprise Manager Database Control.

Using Database Control

The easiest way to manage most components of the automatic SQL management framework is by using Database Control. Follow these steps to manage automatic SQL tasks through Database Control:

  1. Click the Server tab on the home page of Database Control.
  2. On the Server page, click the Automated Maintenance Tasks link in the Oracle Scheduler section.
  3. On the Automated Maintenance Tasks page, click the Automatic SQL Tuning link.
  4. You'll now be on the Automatic SQL Tuning Result Summary page. To turn automatic SQL tuning on and off, go to the Configure Automatic SQL Tuning page by clicking the Configure button in the Task Status section of this page. If you want, you can enable the feature only for some days of the week.
  5. Click the View Report link in the Task Activity Summary section to view the Automatic SQL Tuning Result Details page, shown in the following figure. Here, you can view all recommendations made by the SQL Tuning Advisor, and you can choose to implement some or all of the recommendations from this page. At the bottom of the same page, you can view a histogram showing the pre-change response time and another showing the tuning benefit.

The Automatic SQL Tuning Result Details page

Using the DBMS_AUTO_TASK_ADMIN Package

Use the DBMS_AUTO_TASK_ADMIN package to enable and disable the Automatic SQL Tuning Advisor task and to control various aspects of executing the SQL Tuning Advisor task. Here's a summary of the main procedures you can use to control the execution of the Automatic SQL Tuning Advisor task (a usage sketch follows the list):

  • enable: Enables a previously disabled client, operation, target type, or target.
  • disable: Lets you prevent executing requests from a specific client or operation.
  • get_client_attributes: Returns selected client attribute values.
  • get_p1_resources: Returns the percentage of resources allocated to each automatic maintenance task's High Priority Consumer group.
  • set_p1_resources: Sets the resource allocation for each High Priority Consumer group used by the automatic maintenance task clients.
  • override_priority: Lets you override task priorities.
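For example (a sketch; 'sql tuning advisor' is the client name the automated task framework uses for this feature), you could disable and later re-enable the nightly tuning task as follows:

 SQL> exec dbms_auto_task_admin.disable( -
 >      client_name => 'sql tuning advisor', -
 >      operation => NULL, window_name => NULL);

 SQL> exec dbms_auto_task_admin.enable( -
 >      client_name => 'sql tuning advisor', -
 >      operation => NULL, window_name => NULL);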

SQL Access Advisor Enhancements

Oracle introduced the SQL Access Advisor in the Oracle Database 10g release to help you determine efficient access structures such as indexes, materialized views, and materialized view logs. The SQL Access Advisor takes the actual database workload (the SQL cache), a hypothetical workload, or a SQL tuning set as the basis for making recommendations for better execution of SQL statements. You can run the advisor in limited mode or in comprehensive mode, as was the case with this advisor in Oracle Database 10g.

In Oracle Database 10g, the SQL Access Advisor provided recommendations regarding indexes, materialized views, and materialized view logs. New in Oracle Database 11g, the advisor will also recommend partitions for tables, indexes, and materialized views. The partition enhancements are part of the Partition Advisor, which is integrated into the SQL Access Advisor.

The SQL Access Advisor now shows you the expected gains of implementing its recommendations. In addition, the advisor also reports intermediate results by breaking up the workload into parts. These intermediate results are called publish points.

Partitioning Recommendations

In addition to the normal recommendations to create or drop indexes, materialized views, and materialized view logs, you'll now also get recommendations to partition existing tables and indexes. The advisor may also recommend adding a partitioned index on a table or a materialized view or adding a new partitioned materialized view. In some cases, the advisor may recommend replacing the current table partitioning with a more efficient partitioning scheme.

The SQL Access Advisor may recommend range, interval, list, hash, range-hash, and range-list partitioning schemes for base tables and indexes. It may recommend local, range, and hash partitioning schemes for indexes.

To implement its partition recommendations, the SQL Access Advisor generates an implementation script, which may invoke an external package such as the DBMS_REDEFINITION package, and partitions the base table online. If the table that the advisor is going to partition has any bitmap indexes defined on it, the DBMS_REDEFINITION package may fail. In this case, you must drop the bitmap indexes before the advisor implements its partitioning recommendations and re-create the indexes after the table partitioning is completed.

Here are some key things to note about the partition recommendations:

  • If the partitioning recommendation is one of several recommendations (for example, creating indexes in addition to partitioning), you can't implement only the other recommendations and decide not to partition the table or index. This is because the other recommendations are contingent upon the implementation of the partitioning recommendations.
  • The SQL Access Advisor recommends only single-column interval and hash partitioning, with interval being the default.Hash partitioning is designed to take advantage of partition-wise joins.
  • The advisor recommends partitions only for tables that have at least 10,000 rows.
  • The partitioning recommendations may take a considerable amount of time to complete, if the table is large.
  • Partitioning recommendations are limited to workloads whose predicates or join conditions reference table columns of type NUMBER or DATE only.
  • You’ll need additional storage if the DBMS_REDEFINITION package is invoked, since that package creates a temporary copy of the table during the online reorganization of a table.

Accessing Intermediate Results

As you are probably aware, the SQL Access Advisor takes a considerable amount of time to execute, especially when you run it in comprehensive mode. In Oracle Database 11g, you can access the intermediate results while the advisor is still executing. This gives you the opportunity to cut short an advisor run if the intermediate results aren't appealing to you. Similarly, if the intermediate results already show evidence that the recommendations will lead to a significant benefit, you can stop the advisor run midstream and accept those recommendations.

The SQL Access Advisor bases its partitioning recommendations on its analysis of a significant proportion of the workload. Thus, if you interrupt an advisor task early in the game, the advisor may not include any base table partitioning recommendations in the intermediate results. If you decide to interrupt, wait for a substantial portion of the workload to complete so you can get the partitioning recommendations as well.

To accept the intermediate results, you must formally interrupt the advisor task. The advisor will then stop executing the task and mark it as interrupted. Following this, you can either ask the advisor to complete the recommendation process or choose to accept the recommendations and generate scripts for implementing them.
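As a sketch of this workflow (the task name here is hypothetical), you could interrupt a running task and then generate the implementation script for whatever recommendations exist so far:

 SQL> exec dbms_advisor.interrupt_task('MY_SQLACCESS_TASK');
 SQL> select dbms_advisor.get_task_script('MY_SQLACCESS_TASK') from dual;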

New DBMS_ADVISOR Procedures

Two new procedures in the DBMS_ADVISOR package have a bearing on the SQL Access Advisor. These are the add_sts_ref and copy_sqlwkld_to_sts procedures.

ADD_STS_REF Procedure

You can use the add_sts_ref procedure to link a SQL Access Advisor task to a SQL tuning set. The advisor task can use this link to access the SQL tuning set for analysis purposes. Here's the structure of the add_sts_ref procedure:

 Argument Name         Type          In/Out    Default?
 --------------------  ------------  --------  --------
 TASK_NAME             VARCHAR2      IN
 STS_OWNER             VARCHAR2      IN
 WORKLOAD_NAME         VARCHAR2      IN

Here’s an example showing how to use the add_sts_ref procedure to link a task (task1) to a SQL tuning set (sts_1):

 SQL> exec dbms_advisor.add_sts_ref('task1', 'hr', 'sts_1');

Once you create a link between the advisor task and the SQL tuning set, the STS can't be altered or deleted, thus providing a stable view of the workload for the advisor.


COPY_SQLWKLD_TO_STS Procedure

The copy_sqlwkld_to_sts procedure transfers the contents of a SQL workload object to an STS. The procedure has the following attributes:

 Argument Name         Type          In/Out    Default?
 --------------------  ------------  --------  --------
 WORKLOAD_NAME         VARCHAR2      IN
 STS_NAME              VARCHAR2      IN
 IMPORT_MODE           VARCHAR2      IN        DEFAULT

The workload_name attribute refers to the source SQL workload, and the sts_name attribute points to the target STS into which you'll be importing the SQL workload object.

The import_mode attribute can take the value append, new, or replace and determines the handling of the data in the target STS upon the import of the SQL workload object. Here's an example showing how to use the copy_sqlwkld_to_sts procedure:

 SQL> exec dbms_advisor.copy_sqlwkld_to_sts('workload_1', 'sts_1', 'new');

Oracle will create a new STS if the STS you specify doesn’t exist.

Optimizer and Statistics Collection Enhancements

Oracle Database 11g provides several enhancements in the optimizer statistics collection area. One of the important new features is the extended statistics feature, which lets the optimizer collect statistics on correlated columns in a table, thus providing better selectivity estimates. You can also collect expression statistics on functions and expressions applied to columns.

The statistics that you can collect for column groups and expressions are called extended statistics. The other important enhancement is the new concept of private and public statistics, with private statistics being statistics that you've collected but not yet published for use by the cost optimizer. We'll cover the interesting new features in the following sections.

Multicolumn Statistics

One of the key things the cost optimizer considers during its execution plan evaluation is the selectivity (or cardinality) of the columns in the where clause of a query. The number of different values in a column determines the column's selectivity. A column with many distinct values has good selectivity, and a column where many values are the same has poor selectivity. If you build an index on a column with good selectivity, the optimizer can access the necessary rows faster than it can for a column with lower selectivity.

If a column has poor selectivity—for example, a certain value that's the same in more than 95 percent of the table's rows—it's likely that the cost optimizer will opt for a full table scan, regardless of the existence of an index on that column. If, on the other hand, there are several distinct column values such that no column value is in more than 1 or 2 percent of the table's rows, the optimizer is likely to choose the index for accessing the rows when you issue a query that contains a where clause referencing this column.

Many queries, however, contain multiple columns from a table, each with a different degree of selectivity. Some of these columns are related, but the optimizer doesn't know about the nature of these relationships. For example, in a table containing products and their prices, some products have higher prices than others. However, the cost optimizer, which bases its calculations on the statistics it gathers on individual columns, doesn't know of this relationship. The optimizer can't estimate the combined selectivity of multiple columns when there is skewed data—that is, data where the different values in a column aren't evenly distributed among all rows but rather are bunched together. For the optimizer to estimate the true cardinality of the data, it must know whether the addition of another column to a given column would reduce the result set. Correlated statistics on multiple columns can provide significantly better cardinality estimates than single-column statistics or histograms. Adding an additional predicate can reduce the result set when the two columns are strongly correlated.

Oracle Database 11g introduces extended statistics (also called multicolumn statistics), wherein you can collect statistics on a set of columns together, thus enabling the optimizer to compute the selectivity of multiple single-column predicates accurately. Since closely related columns can affect the combined selectivity for the columns taken as a group, gathering statistics on the related columns as a group (a column group) means the optimizer will have a more correct estimate of the selectivity for the group of columns as a whole, which is your real focus in queries involving predicates on the related columns. This innovation could mean that a query that used a full table scan in prior releases may now use an index scan, making the query run much faster.

The following example makes the concept of multicolumn statistics clearer. Let's take the customers table in the sh schema, which has two related columns, cust_state_province and country_id. You get the following results when you query the customers table:

 SQL> select count(*)
   2  from customers
   3  where cust_state_province = 'CA';

   COUNT(*)
 ----------
       3341
 SQL>

If you add the country_id column to the query, the results are still the same:

 SQL> select count(*)
   2  from customers
   3  where cust_state_province = 'CA'
   4  and country_id = 52790;

   COUNT(*)
 ----------
       3341
 SQL>

The COUNTRY_ID 52790 is for the United States. If the COUNTRY_ID column has a different value, such as 52775, the results are quite different:

 SQL> select count(*)
   2  from customers
   3  where cust_state_province = 'CA'
   4  and country_id = 52775;

   COUNT(*)
 ----------
          0
 SQL>

The optimizer doesn't have any idea about the relationship between the CUST_STATE_PROVINCE and COUNTRY_ID columns. However, if you gather statistics on the two columns as a group, the optimizer has a far better estimate of the selectivity for the group, instead of relying on generating the selectivity estimate for the two columns separately from the statistics on the individual columns.

Creating Multicolumn Statistics

Although the database creates column groups by default by analyzing the workload in the database, you can create your own column groups by using the DBMS_STATS package. Here's the syntax of the create_extended_stats function, which lets you create a column group:

 FUNCTION CREATE_EXTENDED_STATS RETURNS VARCHAR2
 Argument Name         Type          In/Out    Default?
 --------------------  ------------  --------  --------
 OWNNAME               VARCHAR2      IN
 TABNAME               VARCHAR2      IN
 EXTENSION             VARCHAR2      IN

The ownname argument stands for the schema owner's name, and if you're using the current schema, you can specify null as the value for this attribute. The tabname attribute is for the table name, and the extension attribute is where you list the related columns that are going to be part of your new column group.

The following example shows how to create a column group named group1, consisting of two related columns—CUST_STATE_PROVINCE and COUNTRY_ID—from the customers table:

 SQL> declare
   2    group1 varchar2(30);
   3  begin
   4    group1 := dbms_stats.create_extended_stats(null, 'customers',
   5              '(cust_state_province,country_id)');
   6  end;
 SQL> /

 PL/SQL procedure successfully completed.
 SQL>

Executing this PL/SQL procedure creates a new column group group1, consisting of the CUST_STATE_PROVINCE and COUNTRY_ID columns, and adds that group to the customers table. You can check what extensions providing multicolumn statistics exist in the database by issuing the following query:

 SQL> select extension_name, extension
   2  from dba_stat_extensions
   3  where table_name = 'CUSTOMERS';

 EXTENSION_NAME                  EXTENSION
 ------------------------------  ------------------------------------
 SYS_STU#S#WF25Z#QAHIHE#MOFFMM   ("CUST_STATE_PROVINCE","COUNTRY_ID")
 SQL>

The show_extended_stats_name function returns the name of the column group you created as a virtual column, with a database-generated column name. Here's the query:

 SQL> select sys.dbms_stats.show_extended_stats_name('sh', 'customers',
   2         '(cust_state_province,country_id)') col_group_name
   3  from dual;

 COL_GROUP_NAME
 ----------------------------------
 SYS_STU#S#WF25Z#QAHIHE#MOFFMM
 SQL>

Utilizing the virtual column SYS_STU#S#WF25Z#QAHIHE#MOFFMM, which represents the true nature of the correlation between the CUST_STATE_PROVINCE and COUNTRY_ID columns, the cost optimizer now has the correct estimate of the true selectivity of the two correlated columns. A virtual column, as its name suggests, isn't physically part of the table, but the optimizer is aware of it when it's evaluating different execution plans.

Gathering Statistics for Column Groups

When you know that some columns of a table are correlated in a skewed manner, you can let the optimizer know that the data is skewed so it can compute better selectivity estimates. The way to do this is by gathering histograms on the skewed column group, as shown in the following example:

 SQL> exec sys.dbms_stats.gather_table_stats(null, 'customers', -
 >      method_opt => 'for all columns size skewonly')

Earlier you learned how to create a new column group using the create_extended_stats function. You can automatically create a new column group as part of statistics gathering as well, as shown here:

 SQL> exec sys.dbms_stats.gather_table_stats(null, 'customers', -
 >      method_opt => 'for all columns size skewonly -
 >      for columns (cust_state_province,country_id) skewonly')

You specify the new column group you want to create by using the for columns clause. The procedure will both create the new column group and gather statistics on that group in a single step.

Deleting Extended Statistics

You can remove any extended statistics you created by executing the drop_extended_stats procedure, as shown in the following example:

 SQL> exec dbms_stats.drop_extended_stats(null, 'customers', -
 >      '(cust_state_province,country_id)');

 PL/SQL procedure successfully completed.
 SQL>

You can also use the drop_extended_stats procedure to remove any extended statistics that you created for a function on a column.

Expression Statistics for Functions and Expressions

Another area where the cost optimizer has trouble making true selectivity estimates is for columns to which you apply a function or an expression. For example, it's common to use the upper and lower functions when using columns such as LAST_NAME and FIRST_NAME in a query expression, as shown in this example:

 SQL> select count(*)
   2  from employees
   3  where lower(state) = 'tx';

Because of the application of a function to the state column, the optimizer can't get accurate estimates of the selectivity of that column. In Oracle Database 11g, you let the optimizer get better selectivity estimates by gathering expression statistics on the expression.

Suppose that users always search the CUST_STATE_PROVINCE column in lowercase. You can then create extended statistics on the expression LOWER(cust_state_province) so the optimizer can get the correct count of the number of rows:

 SQL> exec dbms_stats.create_extended_stats(null, 'customers', -
 >      '(lower(cust_state_province))')

As with the column groups, you can both create new expression statistics and gather statistics at the same time by using the DBMS_STATS package, as shown in this example:

 SQL> exec dbms_stats.gather_table_stats(null, 'customers_obe', -
 >      method_opt => 'for all columns size skewonly -
 >      for columns (upper(last_name)) skewonly');

You can monitor expression statistics with the DBA_STAT_EXTENSIONS view, and you can drop the expression statistics from a table with the drop_extended_stats procedure of the DBMS_STATS package.

Changing Statistics Preferences

In Oracle Database 10g, you used the DBMS_STATS package's GET_PARAM function to return the default values of parameters for several procedures that belong to the DBMS_STATS package. The GET_PARAM function is obsolete in Oracle Database 11g. Instead, there is a more powerful new function called GET_PREFS, which will get you the default values of various preferences.

The GET_PREFS function has parameters such as cascade, degree, estimate_percent, granularity, publish, and stale_percent, each of which is called a preference. Here's an example of how you can use the get_prefs function to get the preferences for a certain parameter, in this case the stale_percent parameter:

 SQL> select dbms_stats.get_prefs('stale_percent', 'oe',
   2         'product_descriptions') stale_percent
   3  from dual;

 STALE_PERCENT
 -------------
 20
 SQL>

If you provide the schema, the table name, and a preference, the previous function will return the values specified for that table. If you don't specify any values for that table, the function will return the global preference, if you specified one. If you specified neither a global preference nor a table-specific preference, the function will return the default values for that preference.

Setting Preferences

The set_param procedure that enabled you to set default parameter values for various DBMS_STATS procedures in the previous version of the Oracle database is now obsolete. Instead, there is a new procedure, set_global_prefs, that lets you set global statistics preferences for the various parameters. You need the sysdba privilege to set global preferences using the set_global_prefs procedure.

The following example shows how to use the set_global_prefs procedure:

 SQL> exec dbms_stats.set_global_prefs('estimate_percent', '70');

 PL/SQL procedure successfully completed.
 SQL>

Confirm the change in the estimate_percent parameter’s value in the table product_descriptions by using the following query:

 SQL> select dbms_stats.get_prefs('estimate_percent', 'oe',
   2         'product_descriptions') estimate_percent
   3  from dual;

 ESTIMATE_PERCENT
 ----------------
 70
 SQL>

The set_global_prefs procedure will set global preferences for all parameters you can set for the various procedures of the DBMS_STATS package. You can also set the preferences at the database level (set_database_prefs), the schema level (set_schema_prefs), and the individual table level (set_table_prefs).
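For example, here's a sketch (the values are illustrative) of overriding the global estimate_percent preference for a single table:

 SQL> exec dbms_stats.set_table_prefs('OE', 'PRODUCT_DESCRIPTIONS', -
 >      'ESTIMATE_PERCENT', '100');

A table-level preference set this way takes precedence over schema, database, and global preferences whenever statistics are gathered for that table.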

Removing Statistics Preferences

You can use new procedures of the DBMS_STATS package to delete the current statistics preferences. Here are the different statistics preferences deletion procedures (an example follows the list):

  • delete_table_prefs: Deletes statistics preferences for a specific table.
  • delete_schema_prefs: Deletes statistics preferences for a specific schema.
  • delete_database_prefs: Deletes statistics preferences for all tables in the database, other than those owned by Oracle.
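As an illustrative sketch (the table and preference names reuse the earlier example), deleting a single table's preference looks like this:

 SQL> exec dbms_stats.delete_table_prefs('OE', 'PRODUCT_DESCRIPTIONS', -
 >      'ESTIMATE_PERCENT');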

Exporting and Importing Statistics Preferences

You can also export statistics preferences at the database (export_database_prefs), schema (export_schema_prefs), or table (export_table_prefs) level. The following example exports statistics preferences at the database level:

SQL> exec dbms_stats.export_database_prefs('hrtab',statown=>'hr');

The previous procedure will export the statistics preferences of all tables (except those owned by SYS) to the table hrtab. Similarly, you can import statistics preferences from a specified statistics table where you first exported them at the database level (import_database_prefs), schema level (import_schema_prefs), or table level (import_table_prefs).

Keeping Statistics Pending vs. Publishing Statistics Immediately

In prior database versions, once the DBMS_STATS procedures (or the analyze command) collected statistics, the statistics were immediately offered for use by the cost optimizer. In other words, the publishing of the statistics was automatic. This is still the default behavior in Oracle Database 11g, with the important distinction that you can now opt not to publish the statistics automatically, thus giving you an opportunity to test the new statistics to see whether they actually improve query performance. You can view the default behavior regarding the publishing of statistics by using the get_prefs function, as shown here:

 SQL> select dbms_stats.get_prefs('publish') from dual;

 DBMS_STATS.GET_PREFS('PUBLISH')
 -------------------------------
 TRUE
 SQL>

In Oracle Database 11g, there is a distinction between public and private statistics, the former being statistics that have been published for use by the cost-based optimizer in its query optimization process and the latter being statistics that have been gathered but aren't yet published, that is, not made available to the cost-based optimizer. You can choose to collect private statistics at the database level or at the table level, thus giving you great flexibility in how you use this powerful new feature. By default, statistics that the database collects are stored in the data dictionary. The database stores private statistics that you collect in a private area pending their publishing, thus making them unavailable to the cost-based optimizer, which can use only those statistics stored in the data dictionary.

Pending Statistics

How do you test the usage of private statistics by the cost optimizer? The statistics you categorize as private have a pending status at first, before they're published. This is the reason the private statistics are also called pending statistics. After verifying the query performance, you can opt to publish the pending statistics by running the publish_pending_stats procedure in the DBMS_STATS package. If the query performance isn't acceptable, you can delete the pending statistics by running the delete_pending_stats procedure instead.

Making Pending Statistics Available to the Optimizer

Once you test the impact of the new statistics, you can decide whether to publish them. You can make the private statistics available to the cost-based optimizer by setting the initialization parameter optimizer_use_pending_statistics to true. The default value of this parameter is false, meaning the cost optimizer ignores pending statistics by default. Once you change the parameter setting to true, the optimizer will take into account the pending statistics in preference to the published statistics it has for a table or schema.
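A common way to test pending statistics, sketched here, is to enable the parameter only in your own session, run the queries you want to validate, and then revert:

 SQL> alter session set optimizer_use_pending_statistics = true;
 SQL> -- run and examine the test queries here, then revert:
 SQL> alter session set optimizer_use_pending_statistics = false;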

In the following example, we use the set_table_prefs procedure we explained earlier in this chapter to specify that a certain table's statistics not be published automatically. In other words, we are directing the database to collect private instead of public statistics for this table, and these statistics will be in pending mode until you decide on their acceptance. In the example, we turn off automatic publishing of statistics for the table product_descriptions, owned by the user oe. Here's the example:

 SQL> exec dbms_stats.set_table_prefs('oe', 'product_descriptions', -
 >      'publish', 'false');

 PL/SQL procedure successfully completed.
 SQL>

You can also change the publish settings at the schema level instead of the table level. The set_table_prefs procedure we execute here doesn't set off an immediate statistics collection, private or public, for the specified table. It stipulates only that when the database collects statistics the next time for the product_descriptions table, it shouldn't immediately publish the statistics. Instead of publishing the statistics that it gathers for the table product_descriptions, the database will store them in the DBA_TAB_PENDING_STATS table.

You can confirm that the product_descriptions statistics will be private (pending) by issuing the following select statement again:

 SQL> select dbms_stats.get_prefs('publish', 'oe', 'product_descriptions')
   2  from dual;

 DBMS_STATS.GET_PREFS('PUBLISH','OE','PRODUCT_DESCRIPTIONS')
 -----------------------------------------------------------
 FALSE
 SQL>

The value of false in the previous query means that the statistics for the product_descriptions table won't be automatically published but kept private (pending).

Collecting Private Statistics

First, execute the delete_table_stats procedure to delete any existing statistics for the table product_descriptions. Here's the code:

 SQL> exec dbms_stats.delete_table_stats('oe', 'product_descriptions');

 PL/SQL procedure successfully completed.
 SQL>

Once you get rid of all the statistics on the table product_descriptions, collect fresh statistics for the table by issuing the following command:

 SQL> exec dbms_stats.gather_table_stats('oe', 'product_descriptions');

 PL/SQL procedure successfully completed.
 SQL>

The database won't publish the statistics you collected by executing the previous gather_table_stats procedure, since you earlier set the publish preference to false. You can verify that these statistics aren't public by querying the USER_TABLES view in the following manner:

 SQL> select table_name, last_analyzed, num_rows
   2  from user_tables
   3  where table_name = 'PRODUCT_DESCRIPTIONS';

 TABLE_NAME            LAST_ANALYZED    NUM_ROWS
 --------------------  ---------------  --------
 SQL>

The query on the USER_TABLES view shows public statistics only, and thus there is nothing showing that the database actually collected any statistics for the product_descriptions table. However, if you query the USER_TAB_PENDING_STATS view, which holds information about private statistics, you'll see something else:

 SQL> select table_name, last_analyzed, num_rows
   2  from user_tab_pending_stats
   3  where table_name = 'PRODUCT_DESCRIPTIONS';

 TABLE_NAME            LAST_ANALYZED    NUM_ROWS
 --------------------  ---------------  --------
 PRODUCT_DESCRIPTIONS  29-MAY-07            8634
 SQL>

The USER_TAB_PENDING_STATS view shows pending statistics for tables, partitions, and subpartitions owned by the user issuing the command.

Testing, Publishing, and Deleting Pending Statistics

You can export the pending statistics to a different database (for example, with the export_pending_stats procedure) and test them there. If the query results aren't encouraging with the pending statistics, you can delete the pending statistics by executing the delete_pending_stats procedure, as shown here:

 SQL> exec dbms_stats.delete_pending_stats('HR','EMPLOYEES');

You can choose to make pending statistics permanent, that is, publish them, by using the publish_pending_stats procedure, as shown in this example:

 SQL> exec dbms_stats.publish_pending_stats('HR','EMPLOYEES');

If you want to publish all statistics collected for a particular schema, you can do so by executing this procedure:

 SQL> exec dbms_stats.publish_pending_stats('HR',null);

You can publish all pending statistics in the database by executing the following procedure:

 SQL> exec dbms_stats.publish_pending_stats(null, null);

By default, private statistics are turned off, as shown by the default setting of false for the new Oracle Database 11g initialization parameter optimizer_use_pending_statistics.

SQL Plan Management (SPM)

It's not uncommon at all for DBAs to confront situations where a well-functioning system encounters performance problems because of a regression in the execution plans of SQL statements. Execution plans for SQL statements could change for a number of reasons, including the upgrade of the Oracle database or the operating system software, changes in optimizer versions, optimizer statistics or optimizer parameters, data changes, schema definition and metadata changes, deployment of new application modules, adding and dropping indexes, and so on. In earlier versions, Oracle provided features such as stored outlines and SQL profiles to enable the stability of execution plans over time. However, all these required you, the DBA, to decide to use and implement the plan stability features.

Oracle Database 11g introduces a new plan stabilization feature called SQL Plan Management that lets the database control the evolution of SQL execution plans through SQL plan baselines. The goal of SPM is to preserve the performance of your SQL code in the face of changes such as database upgrades, system and data changes, and application upgrades and bug fixes. You can currently use SQL profiles through Oracle's automatic SQL tuning feature to produce well-tuned execution plans, but that is a purely reactive mechanism, which doesn't help you when there are drastic changes in your system. Oracle has designed SPM as a preventative mechanism that aims to preserve the performance of SQL statements in the face of the numerous types of changes listed earlier. Whereas a SQL profile contains statistics that will improve the execution plan for a SQL statement, a SQL plan baseline contains hints for the optimizer to generate a better execution plan for a SQL statement.

Once you enable this new plan stabilization feature, new execution plans generated by changes in factors such as optimizer statistics, for example, are accepted only if they don't lead to a deterioration in performance. The cost-based optimizer manages the SQL plan baselines and uses only known and verified plans. After verification, the optimizer uses only comparable plans or plans that yield better performance than the current execution plans. SPM builds SQL plan baselines of known good execution plans. SQL Plan Management aims to replace the plan stability feature in earlier releases of the Oracle database. The older stored outlines feature is deprecated in this release, and Oracle recommends that you migrate your stored outlines to SQL plan baselines.

The database stores an execution plan as a SQL plan baseline, thus maintaining the plan for a set of SQL statements. Provided the database has the necessary optimizer statistics, the optimizer first evolves a best-cost plan and then tries to match it with a plan in the SQL plan baseline. If the optimizer finds a match, it uses this plan. Otherwise, the optimizer will evaluate all the accepted plans in the SQL plan baseline and choose the lowest-cost plan for executing the statement.

SPM involves three distinct steps or stages—capturing, selecting, and evolving SQL plan baselines. The following sections summarize the three phases.

Capturing SQL Plan Baselines

The first phase involves capturing the SQL plan baselines. To support SPM and the performance verification of newly evolved execution plans, the optimizer maintains a plan history for each repeatable SQL statement containing the different execution plans generated for that statement. The optimizer considers a SQL statement repeatable if the database parses or executes that statement multiple times after the optimizer logs the statement in the SQL statement log that it maintains.

You can load the SQL plan baselines automatically or use manual bulk-loading procedures. First we'll discuss the automatic approach to creating and maintaining plan history for SQL statements, using the optimizer-provided data. Then we'll show how to do this manually.

Automatic SQL Plan Capture

You can enable automatic loading of SQL plan baselines by setting the initialization parameter optimizer_capture_sql_plan_baselines to true, as shown here:

 SQL> alter system set optimizer_capture_sql_plan_baselines = true;

 System altered.

The default value of the optimizer_capture_sql_plan_baselines parameter is false, meaning automatic plan capture is turned off by default. By setting the parameter's value to true in the parameter file, or with the alter system or alter session statement, you let the database automatically start recognizing repeatable SQL statements and create a plan history for them.

The database uses the optimizer details about each SQL statement to reproduce the execution plan for it. The first plan for each SQL statement is marked for use by the optimizer. At this point, both the plan history and the SQL plan baseline are identical, since there's only a single plan to consider. All new plans that the optimizer subsequently generates for a SQL statement become part of the plan history. During the final SQL plan baseline evolution phase, the database will add to the SQL plan baseline any plan that's verified not to lead to a performance regression.

Manual SQL Plan Loading

In addition to this automatic tracking of repeatable SQL statements, you can also manually seed plans for a set of SQL statements (a SQL tuning set). You can supply the SQL plans as an alternative or in addition to the automatic capture of SQL by the database. An important distinction between automatically capturing SQL plans and manually loading SQL plans is that the database adds all manually loaded plans to a SQL plan baseline as accepted plans, without subjecting them to verification for performance.

You can use the new DBMS_SPM package to load SQL plans directly into a SQL plan baseline. You can use the cursor cache as the source of the SQL plans or load the plans from SQL tuning sets and AWR snapshots. In the following example, we show how to load SQL plans manually from an STS. Make sure you create and load the STS first, before executing the load_plans_from_sqlset function, as shown here:

 SQL> declare
   2    result number;
   3  begin
   4    result := dbms_spm.load_plans_from_sqlset(sqlset_name => 'sqlset1');
   5  end;
   6  /

 PL/SQL procedure successfully completed.
 SQL>

The previous function loads all SQL plans from an STS into SQL plan baselines. The plans aren't verified, as we mentioned earlier, and are added as accepted plans to an existing or new SQL plan baseline. If you want to load plans from the AWR, first load the SQL plans from the AWR into an STS before executing the load_plans_from_sqlset function. You can alternatively load any SQL plans you want from the cursor cache using the load_plans_from_cursor_cache function, as shown here:

 SQL> declare
   2    result number;
   3  begin
   4    result := dbms_spm.load_plans_from_cursor_cache(sql_id => '7ztv2z24kw0s0');
   5  end;
   6  /

 PL/SQL procedure successfully completed.
 SQL>

In this example, we load a plan from the cursor cache by providing its SQL ID. Instead of the SQL ID, you can also use the SQL statement text or the SQL handle to load a SQL statement into a SQL plan baseline.

SQL Plan Baseline Selection

During the selection phase, the database automatically selects SQL plans for every SQL statement in a way that avoids a performance regression. The cost optimizer adopts a conservative plan selection strategy that works in this way: each time the database compiles a SQL statement, the optimizer builds a best-cost plan, tries to match it with a plan in the SQL plan baseline, and uses that plan if there is a match. If the optimizer doesn't find a match, it selects the cheapest accepted plan from the SQL plan baseline for execution. In this case, the optimizer's best-cost plan (which doesn't match any of the plans in the plan history) is added as a new plan to the baseline. The database doesn't use this new plan automatically since it is as yet a nonaccepted plan. The database can use this plan only after it verifies that it doesn't lead to a performance regression, or if it's the only available plan.
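This baseline-aware plan selection is itself governed by the initialization parameter optimizer_use_sql_plan_baselines, which defaults to true; setting it to false makes the optimizer ignore the baselines entirely. As a minimal sketch:

 SQL> alter session set optimizer_use_sql_plan_baselines = true;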

Evolving SQL Plan Baselines

A SQL plan baseline can't remain static over time; it has to evolve with the changes in your system by analyzing the good and the not-so-good plans over a period of time. The database doesn't automatically implement all SQL plans generated by the cost-based optimizer. A SQL plan is deemed an acceptable plan only after verification of its performance as compared to the SQL plan baseline performance, and the SQL plan baseline consists of all the acceptable plans for a specific SQL statement. Evolving a SQL plan means changing a nonaccepted plan into an accepted plan.

The original plan baseline will always consist of at least one plan, which, of course, will be the first execution plan generated by the optimizer for a given SQL statement. The database continuously evaluates the performance of new nonaccepted plans by comparing each plan's performance with a plan selected from the SQL plan baseline. All the plans that don't lead to a performance regression are changed into accepted plans and integrated into the SQL plan baseline.

There are three ways to evolve a SQL plan baseline—by manually loading plans, by using the DBMS_SPM package, and by running the SQL Tuning Advisor. We'll review the three methods briefly in the following sections.

Manually Loading Plans into the Baselines

As described earlier, when you manually load SQL plan baselines by using SQL tuning sets or the cursor cache as the source for the statements, the plans you load are considered accepted plans, which makes them equivalent to verified plans.

Using the DBMS_SPM Package to Evolve Plans

The function DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE evolves baselines associated with one or more SQL statements. The function compares the execution performance of a nonaccepted plan, and if it finds the performance better than the SQL plan baseline performance, it changes the nonaccepted plan into an accepted plan. Here's an example:

 SQL> declare
   2    report clob;
   3  begin
   4    report := dbms_spm.evolve_sql_plan_baseline(
   5                sql_handle => 'aq8yqxyyb40nn');
   6  end;
   7  /

You must grant a user the ADMINISTER SQL MANAGEMENT OBJECT privilege so they can execute the evolve_sql_plan_baseline function.
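For example (a sketch, with a hypothetical grantee):

 SQL> grant administer sql management object to hr;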

Evolving Plans with the SQL Tuning Advisor

When you run the SQL Tuning Advisor, it may sometimes recommend the acceptance of a SQL profile when the advisor runs across a plan whose performance is better than any plan in the SQL plan baseline. If you implement the recommendation by accepting the SQL profile, the SQL Tuning Advisor automatically adds that plan to the SQL plan baseline. In Oracle Database 11g, as we explained in the SQL Tuning Advisor discussion, the Automatic SQL Tuning Advisor task that runs nightly during the maintenance window automatically implements the SQL profile recommendations made by the SQL Tuning Advisor. Thus, any tuned high-load SQL statements automatically become part of the SQL plan baselines for those statements.

Displaying SQL Plan Baselines

You can examine the SQL plan baselines for any SQL statement using the DISPLAY_SQL_PLAN_BASELINE function of the DBMS_XPLAN package, as shown here:

 SQL> select * from table(
   2    dbms_xplan.display_sql_plan_baseline(
   3      sql_handle => 'SYS_SQL_2d02582d7a04c30b',
   4      format     => 'basic'));

 PLAN_TABLE_OUTPUT
 ------------------------------------------------------------
 SQL handle: SYS_SQL_2d02582d7a04c30b
 SQL text: select * from hr.employees order by last_name
 ------------------------------------------------------------
 Plan name: SYS_SQL_PLAN_7a04c30b6d8b6a03
 Enabled: YES  Fixed: NO  Accepted: YES  Origin: MANUAL-LOAD
 ------------------------------------------------------------
 Plan hash value: 3447538987

 --------------------------------------
 | Id | Operation          | Name      |
 --------------------------------------
 |  0 | SELECT STATEMENT   |           |
 |  1 | SORT ORDER BY      |           |
 |  2 | TABLE ACCESS FULL  | EMPLOYEES |
 --------------------------------------

 20 rows selected.
 SQL>

As shown here, the function DISPLAY_SQL_PLAN_BASELINE displays and provides information about the baseline plans.

Managing the SQL Management Base

The optimizer is in charge of automatically managing the SQL management base (SMB), which is part of the data dictionary and stores the statement log maintained by the optimizer. The SMB also stores the SQL plan histories and the SQL plan baselines. Unlike the data dictionary, which is stored in the system tablespace, the database stores the SMB in the sysaux tablespace. You can configure the SMB to occupy anywhere between 1 and 50 percent of the size of the sysaux tablespace; the default allocation is 10 percent. A background process that runs once a week will issue an alert if the SMB exceeds its space allocation. You can increase the space allocation percentage, increase the size of the sysaux tablespace, or purge SQL plan baselines and SQL profiles from the SMB to make more room for new data.

Use the new view DBA_SQL_MANAGEMENT_CONFIG to see the configuration parameters of the SQL management base. There are two configuration parameters you can modify, space_budget_percent and plan_retention_weeks. The space_budget_percent parameter shows the current SMB space limit in the sysaux tablespace, and the plan_retention_weeks parameter shows how long the database will retain the data in the SMB before automatically purging it. You can view the current values for both of these parameters by issuing the following statement:

 SQL> select parameter_name, parameter_value
   2  from dba_sql_management_config;

 PARAMETER_NAME            PARAMETER_VALUE
 ------------------------  ---------------
 SPACE_BUDGET_PERCENT                   10
 PLAN_RETENTION_WEEKS                   53
 SQL>

This query shows the default values for the two configuration parameters related to the SQL management base. The database will retain unused plans for 53 weeks before purging them. You can also purge individual plans with the help of the purge_sql_plan_baseline function. You can change either of the two configuration parameters by using the configure procedure of the new DBMS_SPM package. The following example shows how to change the setting for the space_budget_percent parameter:

SQL> execute dbms_spm.configure('space_budget_percent', 30);

A scheduled purging job that runs as an automated task in the maintenance window takes care of purging unused plans. The purging job automatically removes any execution plan that the optimizer hasn't used in more than a year.

You can, however, change this default purging behavior and specify that an unused plan not be purged for a period as long as 10 years, as shown in the following example (520 weeks is 10 years):

 SQL> execute dbms_spm.configure ('plan_retention_weeks', 520);

The execution of the configure procedure in the example ensures that all execution plans in the SQL management base are retained for 10 years.

Managing SQL Plan Baselines

You can use the DBA_SQL_PLAN_BASELINES view to see the current SQL plan baselines for SQL statements. Besides various SQL plan and SQL identifiers, execution statistics, and timestamps, the view contains the following key columns:

  • origin: Tells you how the SQL plans were created. There are four possible values for this column:
    • MANUAL-LOAD
    • MANUAL-SQLTUNE
    • AUTO-CAPTURE
    • AUTO-SQLTUNE
  • enabled: Shows whether the plan baseline is enabled.
  • accepted: Shows whether the plan baseline is accepted.
  • fixed: Shows whether the plan baseline is fixed.

Here's an example showing how you can get details about plan baselines from the DBA_SQL_PLAN_BASELINES view:

 SQL> select sql_handle, origin, enabled, accepted
   2  from dba_sql_plan_baselines;

 SQL_HANDLE                ORIGIN        ENABLED  ACCEPTED
 ------------------------  ------------  -------  --------
 SYS_SQL_2d02582d7a04c30b  MANUAL-LOAD   YES      YES
 SYS_SQL_6a74b7e2315d3858  AUTO-CAPTURE  YES      YES
 SYS_SQL_827bc97a47b3e31d  AUTO-CAPTURE  YES      YES
 SYS_SQL_9ce6569c74a9b77f  MANUAL-LOAD   YES      YES
 SYS_SQL_df1c094103fb0616  AUTO-CAPTURE  YES      YES
 SQL>

The query shows whether the plan was captured automatically or manually, as well as whether it is accepted and enabled.

You can alter any of the attributes of a SQL plan baseline by using the DBMS_SPM.ALTER_SQL_PLAN_BASELINE function. You can also import and export SQL plan baselines using Oracle Data Pump and procedures from the DBMS_SPM package.
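As a sketch of altering a baseline attribute (the SQL handle reuses the earlier example, and the attribute value is illustrative), you could mark all of a statement's baseline plans as fixed like this:

 SQL> declare
   2    n pls_integer;
   3  begin
   4    n := dbms_spm.alter_sql_plan_baseline(
   5           sql_handle      => 'SYS_SQL_2d02582d7a04c30b',
   6           attribute_name  => 'fixed',
   7           attribute_value => 'YES');
   8  end;
   9  /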

ADDM Enhancements

In Oracle Database 10g, Oracle introduced the Automatic Database Diagnostic Monitor, which uses the automatic workload repository snapshots to provide diagnosis regarding SQL statements, I/O, and CPU usage in the database.

The ADDM is based on the single-minded goal of reducing "DB time." ADDM provides you with quick results that help pinpoint the root cause of a performance problem, besides telling you what the nonproblem areas are, so you don't waste your time chasing down unproductive avenues.

Instance ADDM and Database ADDM

You can run the ADDM in two distinct modes in Oracle Database 11g. In the first mode, called instance ADDM, the monitor provides traditional instance-level performance evaluation.

The instance ADDM is what you have in Oracle Database 10g. Oracle calls the new ADDM mode introduced in the new release database ADDM, and its goal is to analyze and report on Oracle RAC installations.

In this mode, the ADDM report will contain an analysis of the cluster as a unit besides an analysis of each of the instances that are part of the cluster. Oracle's target for the database ADDM is database administrators rather than application developers, for whom the instance ADDM is more relevant.

The instance ADDM runs automatically after the database takes each AWR snapshot, and it produces the same reports that it did in the Oracle Database 10g database. Depending on whether you run the ADDM in global or instance mode, you'll have global findings or local findings.

The global ADDM is appropriate only for a RAC setup, where the tool can analyze the performance of the entire cluster together as one unit. For example, the DB time is computed as the sum of the database times for all instances in the RAC cluster.

Global ADDM also reports facts such as CPU consumption and server performance at the aggregate cluster level, thus giving you an overall view of how the RAC installation is performing. You can also perform a partial analysis of just a subset of the instances in a cluster.

This is known as the partial analysis ADDM. If you are concerned about the performance of only some but not all instances in a cluster, partial analysis ADDM will provide results faster than an exhaustive database ADDM analysis.

Running ADDM Over a Specific Time Period

You can now run an ADDM task for any specific range of time you want, instead of being able to run it only over a pair of snapshots. This is a great help, especially when there is a hung database or a sharp spike in activity in the database that hasn't been captured yet by an AWR snapshot.

Specifying the Type of ADDM Task

You can use the DBMS_ADVISOR package's set_default_task_parameter procedure to specify the type of ADDM analysis you want to run. You set the values of the INSTANCE and INSTANCES parameters to select among a database ADDM, an instance ADDM, or a partial analysis ADDM.
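Here's a sketch of such a call (assuming the ADDM advisor name 'ADDM' and the INSTANCES parameter described above; the instance numbers are illustrative), setting the default analysis to cover instances 1 and 2:

 SQL> exec dbms_advisor.set_default_task_parameter( -
 >      'ADDM', 'INSTANCES', '1,2');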

Advisor Findings

The various advisors that are part of the management advisory framework issue findings. Oracle Database 11g names and classifies the advisor findings, making it easier for you to keep track of the frequency with which various findings occur. There is a new view named DBA_ADVISOR_FINDING_NAMES that lets you look up all finding names. Here's a query that lets you view all findings in the database:

 SQL> select finding_name from dba_advisor_finding_names;

 FINDING_NAME
 -----------------------------------
 normal, successful completion
 "Administrative" Wait Class
 ...
 Buffer Busy
 ...
 Undersized instance memory

 80 rows selected.
 SQL>

There is also a new FINDING_NAME column added to the DBA_ADVISOR_FINDINGS view, which lets you make queries such as the following:

 SQL> select finding_name, impact from dba_advisor_findings;

 FINDING_NAME                     IMPACT
 -------------------------------  ---------
 normal, successful completion            0
 normal, successful completion            0
 I/O Throughput                    34077700
 Undersized instance memory      140000000
 Hard Parse                       256895416
 ...
 78 rows selected.
 SQL>

Classification of findings applies to all types of advisors, not just the ADDM, and helps you find the frequency with which different findings are issued.

Using the New DBMS_ADDM Package

The DBMS_ADDM package is a new package in the Oracle Database 11g Release 1, designed to make it easier to manage ADDM. Here’s a brief summary of the main procedures and functions of this package:

  • analyze_db: Creates a database ADDM (global) task.

    SQL> exec dbms_addm.analyze_db('Database ADDM Task 1', 1, 2);

  • analyze_inst: Creates an instance ADDM (local) task.

    SQL> exec dbms_addm.analyze_inst('Instance ADDM Task 1', 1, 2);

  • analyze_partial: Creates a task for analyzing only some of the instances (the second argument is a comma-separated list of instance numbers).

    SQL> exec dbms_addm.analyze_partial('Partial ADDM Task 1', '1,2', 1, 2);

  • get_report: Produces a text report of the ADDM task results.

    SQL> select dbms_addm.get_report('Partial ADDM Task 1') from dual;

  • delete: Deletes an ADDM task of any type.

    SQL> exec dbms_addm.delete('Instance ADDM Task 1');

Note that you must pass the beginning and ending snapshot IDs when creating any type of ADDM task. Here's an example of an instance ADDM execution with a pair of snapshots:

 SQL> var task_name varchar2(100);
 SQL> begin
   2    :task_name := 'Instance ADDM Task 1';
   3    dbms_addm.analyze_inst(:task_name, 10, 15);
   4  end;
   5  /

To get a text report of the previous task, call the get_report function, as shown here:

 SQL> set long 100000
 SQL> set pagesize 50000
 SQL> select dbms_addm.get_report(:task_name) from dual;

The DBMS_ADDM package lets you add various directives when you create a new ADDM task to help you control the work of the ADDM. Here’s a description of the various procedures you can use to create directives to limit the work of the ADDM:

  • insert_finding_directive: Creates a directive that limits the reporting of a specific finding type
  • insert_sql_directive: Creates a directive to limit the reporting on a specific SQL statement
  • insert_segment_directive: Creates a directive to prevent the Automatic Segment Advisor from running on certain segments
  • insert_parameter_directive: Creates a directive to prevent the modification of values for a specified system parameter

Here’s an example that shows how to create an ADDM directive to filter out findings relating to undersized instance memory:

 SQL> var task_name varchar2(100);
 SQL> begin
   2    dbms_addm.insert_finding_directive(
   3      null,
   4      'Undersized Instance Memory Directive',
   5      'Undersized instance memory',
   6      5,
   7      10);
   8    :task_name := 'ADDM Task 1';
   9    dbms_addm.analyze_inst(:task_name, 1, 2);
  10  end;
 SQL> /

The example shown here creates an instance ADDM task with a finding directive specifying that the "Undersized instance memory" finding be displayed only if it is related to at least five active sessions during the time of the ADDM analysis.

In addition, the finding must be responsible for at least 10 percent of the database time during the analysis period. You can delete any directive you create with the corresponding DELETE_* procedure. For example, the delete_finding_directive procedure deletes a finding directive.
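As a brief sketch, here's how you might remove the directive created above (assuming the documented two-argument signature of delete_finding_directive; passing null for the task name targets the directive regardless of task):

 SQL> exec dbms_addm.delete_finding_directive( -
        null, 'Undersized Instance Memory Directive');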

AWR Baselines

The AWR generates automatic snapshots of performance data on an hourly basis. You can also generate snapshots manually with the DBMS_WORKLOAD_REPOSITORY package. The ADDM then uses the data thus collected in the AWR.
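For example, here's a minimal sketch of taking a manual snapshot with the create_snapshot procedure:

 SQL> exec dbms_workload_repository.create_snapshot();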

In Oracle Database 10g, you learned how to use AWR baselines, which provide a frame of reference for comparing performance and advisor reports between two periods.

An AWR baseline is simply database performance data from a set of AWR snapshots, representing database performance at an optimum level. When a performance problem occurs, the database compares performance data from a previous "good" period with the performance of a similar workload.

You create a baseline with the create_baseline procedure of the DBMS_WORKLOAD_REPOSITORY package and drop it by executing the drop_baseline procedure.
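Here's a brief sketch of both calls (the snapshot IDs and baseline name are illustrative):

 SQL> exec dbms_workload_repository.create_baseline( -
        start_snap_id => 100, end_snap_id => 105, -
        baseline_name => 'peak_load_baseline');
 SQL> exec dbms_workload_repository.drop_baseline( -
        baseline_name => 'peak_load_baseline', cascade => false);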

You could create and delete only simple fixed baselines in Oracle Database 10g. A fixed baseline covers a fixed period that you specify. Oracle Database 11g adds more sophistication to the management of AWR baselines by providing the capability to create more complex and productive baselines. Oracle Database 11g contains the following enhancements relating to AWR baselines:

  • Moving window baselines
  • Baseline templates
  • Ability to rename baselines
  • Ability to set expiration dates for baselines

You can still use the DBMS_WORKLOAD_REPOSITORY package's procedures create_snapshot, drop_snapshot, modify_snapshot_settings, create_baseline, and drop_baseline to manage snapshots and baselines.

In Oracle Database 11g, you have access to new procedures that let you manage the added functionality of the AWR baselines. We'll briefly describe the AWR baseline enhancements in the following sections.

Moving Window Baselines

A moving window baseline doesn't require you to specify a pair of snapshots. The range of the baseline keeps "moving," or changing, continuously. By default, Oracle automatically creates a system-defined moving window baseline named system_moving_window, whose time span corresponds to the entire AWR retention period, which is now eight days by default (instead of seven days).

The default moving window baseline will always use AWR data from the previous eight days. Here’s how you can view the moving window baseline information.

 SQL> select baseline_name, baseline_type, start_snap_time, end_snap_time
      from dba_hist_baseline;

 BASELINE_NAME           BASELINE_TYP     START_SNAP_TIME  END_SNAP_TIME
 ----------------------  ---------------  ---------------  --------------
 SYSTEM_MOVING_WINDOW    MOVING_WINDOW    05-JUN-07        13-JUN-07

 SQL>

Note that the eight-day span for system_moving_window is a default value, which you can modify. However, the time range for this window can never exceed the AWR retention period.

Therefore, if you want to specify a 30-day time span for the system_moving_window, you must first raise the AWR retention period to 30 days, up from its default of eight days. (In Oracle Database 10g, the default was seven days.) Use the new modify_baseline_window_size procedure of the DBMS_WORKLOAD_REPOSITORY package to modify the size of the default moving window:

  1. Change the retention period of the AWR, since the default moving window size has to fit within the AWR retention period, whose default value is eight days. Here's how you change the retention period to 15 days (21,600 minutes):

 SQL> execute dbms_workload_repository.modify_snapshot_settings
      (retention => 21600);
 PL/SQL procedure successfully completed.
 SQL>

     You can set the retention period to anywhere from one day to ten years.

  2. Once you set the new AWR retention period, you can modify the default moving window size as shown here:

 SQL> execute dbms_workload_repository.modify_baseline_window_size
      (window_size => 15, dbid => 3863017760);
 PL/SQL procedure successfully completed.
 SQL>

The window_size parameter sets a new window size for the default moving window baseline. In this case, we set the new window size to 15 days, which is greater than the default AWR retention period of eight days. You can set only a window size that is equal to or less than the AWR retention setting. In cases such as this, where you want a window size greater than the current AWR retention setting, you must first execute the modify_snapshot_settings procedure to raise the AWR retention setting.

Baseline Templates

Oracle Database 11g provides the ability to create baseline templates, which enable you to create AWR baselines automatically for a specific period, whether that period is in the past or in the future.

The big advantage of using a baseline template to create baselines is that you don't need to supply the starting and ending snapshot identifiers required by the create_baseline procedure. Oracle Database 11g provides you with two types of AWR baselines: single and repeating.

A single baseline is static in nature. A repeating baseline is dynamic and changing, since it captures performance over recurring periods. Oracle also provides baseline templates to create both types of AWR baselines:

  • You can use a single baseline template to create a baseline for a single contiguous period, such as one that covers the testing of a new application or an upgrade.
  • You use a repeating baseline template to create baselines on a repeating schedule, such as when you want to capture the performance of a key batch job that runs regularly. You can specify that the database automatically remove older baselines after a certain time.

Generating a Template for a One-Time Baseline

The following syntax enables you to specify a template for the creation of a single AWR baseline in some future period:

 dbms_workload_repository.create_baseline_template(
    start_time     IN DATE,
    end_time       IN DATE,
    baseline_name  IN VARCHAR2,
    template_name  IN VARCHAR2,
    expiration     IN NUMBER,
    dbid           IN NUMBER DEFAULT NULL);

Note that you can use the expiration attribute to set the time (in days) for which Oracle must maintain the baseline. The default value is null, meaning the database will never drop the baseline.
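Here's a brief sketch of creating a one-time template (the dates, names, and 30-day expiration are illustrative):

 SQL> begin
        dbms_workload_repository.create_baseline_template(
          start_time    => to_date('2007-06-20 09:00', 'YYYY-MM-DD HH24:MI'),
          end_time      => to_date('2007-06-20 17:00', 'YYYY-MM-DD HH24:MI'),
          baseline_name => 'upgrade_test_baseline',
          template_name => 'upgrade_test_template',
          expiration    => 30);
      end;
      /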

Generating a Template for a Repeating Baseline

You can specify a template for creating a repeating baseline using the following procedure:

 dbms_workload_repository.create_baseline_template(
    day_of_week           IN VARCHAR2,
    hour_in_day           IN NUMBER,
    duration              IN NUMBER,
    start_time            IN DATE,
    end_time              IN DATE,
    baseline_name_prefix  IN VARCHAR2,
    template_name         IN VARCHAR2,
    expiration            IN NUMBER,
    dbid                  IN NUMBER DEFAULT NULL);

You can use the drop_baseline_template procedure to drop either a one-time or a repeating baseline template.
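As a sketch (the names and schedule are illustrative), the following creates a template that captures a four-hour baseline every Monday starting at 2 a.m. for the next 90 days, and then drops the template:

 SQL> begin
        dbms_workload_repository.create_baseline_template(
          day_of_week          => 'MONDAY',
          hour_in_day          => 2,
          duration             => 4,
          start_time           => sysdate,
          end_time             => sysdate + 90,
          baseline_name_prefix => 'batch_baseline_',
          template_name        => 'batch_template',
          expiration           => 30);
      end;
      /
 SQL> exec dbms_workload_repository.drop_baseline_template('batch_template');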

Ability to Rename Baselines

You can use the new rename_baseline procedure to rename an AWR baseline, as shown in the following example:

 SQL> execute dbms_workload_repository.rename_baseline( -
        old_baseline_name => 'old_baseline', -
        new_baseline_name => 'new_baseline', -
        dbid => 3863017760);

The dbid parameter, as in the earlier example, is optional; the database identifier of the local database is used as the default value if you don't provide one.

Ability to Set Expiration Dates for Baselines

When you use the create_baseline procedure to create a new AWR baseline, you can specify the expiration time for that baseline by setting a value for the procedure's new expiration attribute.

The default value of null for the expiration attribute means that a baseline will never expire, which was the only behavior possible in Oracle Database 10g.
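For example, here's a sketch of creating a baseline that the database drops after 30 days (the snapshot IDs and the name are illustrative):

 SQL> exec dbms_workload_repository.create_baseline( -
        start_snap_id => 200, end_snap_id => 210, -
        baseline_name => 'month_end_baseline', -
        expiration => 30);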

Baselines and Template Dictionary Views

There are several new DBA views to help you manage AWR baselines and templates. The three views that will help you the most are as follows:

  • DBA_HIST_BASELINE (modified)
  • DBA_HIST_BASELINE_DETAILS (new)
  • DBA_HIST_BASELINE_TEMPLATE (new)
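For instance, a quick illustrative query against the template view shows the templates you've defined and their schedules:

 SQL> select template_name, template_type, start_time, end_time
      from dba_hist_baseline_template;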

Adaptive Baseline Metric Thresholds

The Oracle database uses performance alerts to warn you about violations of thresholds for various activities in the database. However, setting alert thresholds is a difficult task because the expected metric values can vary with the instance load and the workload type.

In the Oracle Database 11g release, the database uses the AWR baseline metric statistics to determine alert thresholds. Once the database computes baseline statistics for a specified baseline, you can align the metric thresholds with these AWR baselines. These thresholds are termed AWR baseline metric thresholds. Using AWR baselines as the source of the metric statistics lets you specify adaptive thresholds for database performance. The new metric thresholds are called adaptive because they're based on the system_moving_window baseline, which is by definition constantly changing.

You can define AWR baseline alert thresholds either through the PL/SQL interface (a sketch appears at the end of this section) or through Database Control. The following steps show you how to use Database Control to configure adaptive thresholds:

  1. Click the Server tab on the Database home page.
  2. Click the AWR Baselines link in the Statistics Management section.
  3. You’ll be on the AWR Baselines page. Click the Baseline Metric Thresholds link.
  4. Select the metric type you want, and click Edit Thresholds.
  5. On the Edit Thresholds page, specify critical and warning thresholds for the metric in the Threshold Settings section.
  6. Click Apply Thresholds to apply the threshold settings.

You must specify three threshold settings for each metric for which you choose to configure adaptive thresholds:

  • Threshold Type: Choose from Significance Level, which sets the threshold so that only an unusually extreme observation exceeds it (for example, 1 in 100); Percentage of Maximum; and Fixed Values.
  • Warning: Warning threshold.
  • Critical: Critical threshold.

Enterprise Manager provides a starter set of thresholds based on online transaction processing (OLTP) or data warehouse workload profiles. Once you select a profile, the database automatically configures adaptive thresholds for the appropriate group of metrics for that workload by using metrics data from the system_moving_window baseline.
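As for the PL/SQL route mentioned earlier, metric thresholds have long been settable through the DBMS_SERVER_ALERT package. The following is a minimal sketch of that general interface only; the metric, values, instance name, and service name are illustrative, and adaptive baseline thresholds themselves are most easily configured through Database Control as described above:

 SQL> begin
        dbms_server_alert.set_threshold(
          metrics_id              => dbms_server_alert.cpu_time_per_call,
          warning_operator        => dbms_server_alert.operator_ge,
          warning_value           => '8000',
          critical_operator       => dbms_server_alert.operator_ge,
          critical_value          => '10000',
          observation_period      => 1,
          consecutive_occurrences => 2,
          instance_name           => 'inst1',
          object_type             => dbms_server_alert.object_type_service,
          object_name             => 'my_service');
      end;
      /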

Performance-Related Changes in Database Control

There are numerous changes in Database Control pertaining to database performance. Since Oracle Database 11g introduces several new performance features, Database Control will, of course, support those new features. Our focus here is on the main changes to the Performance page in Database Control.

Customized Performance Page

You can now set up a customized Database Performance page in Database Control to suit your purposes. Database Control stores the customized settings in its repository and retrieves them for your session when you log in again. You can customize things such as which charts appear in the Active Sessions section of the Performance page. You can also configure whether baseline values should appear in the Throughput and Services charts.

Click the Settings link to go to the Performance Page Settings page, which has two sections. The Detailed Chart Settings section lets you choose defaults for displaying the instance activity charts. The Baseline Display Settings section lets you select if and how the AWR baseline values are displayed in the Performance page charts.

The Detailed Chart Settings section on the Performance Page Settings page lets you make the following display choices for the Performance page:

  • Default View: You can select from Throughput, I/O, Parallel Execution, and Services.
  • Throughput Chart Settings: You can choose Per Second or Per Transaction.
  • I/O Chart Settings: You can choose from I/O Function, I/O Type, and Consumer Group.

Here are the steps to follow to change the display of the instance activity charts:

  1. Click Settings on the Performance page.
  2. Choose the default view you want to appear in the Average Active Sessions section by making a choice in the Default View list. You can choose among Throughput, I/O, Parallel Execution, and Services.
  3. In Throughput Chart Settings, select Per Second or Per Transaction.
  4. Under I/O Chart Settings, select from I/O Function, I/O Type, or Consumer Group as the default I/O breakdown on the I/O tab.
  5. Click OK.

The Performance Page Settings page that appears now will reflect the new display choices you made, as shown in the following figure.

The Performance Page Settings page

Average Active Sessions

The Average Active Sessions section on the Performance page offers a window into the performance of the database by showing, in graphical form, what proportion of time users spend waiting and the time they spend actually doing work, such as I/O. Monitoring user activity, and probing further when there are spikes in CPU usage, for example, helps you identify the causes of poor performance.

ADDM Performance Analysis

On the Database Control Database home page, you can now see an ADDM performance analysis summary table, which gives you a quick overview of current ADDM findings. This is the same table you'd get if you clicked ADDM Findings in the Diagnostic Summary section of the Database home page. The following figure shows the ADDM Performance Analysis table.

The ADDM Performance Analysis table