Oracle Database 11g provides several enhancements in the area of optimizer statistics collection. One of the important new features is the extended statistics feature, which lets the optimizer collect statistics on correlated columns in a table, thus providing better selectivity estimates. You can also collect expression statistics on functions and expressions applied to columns.
The statistics that you can collect for column groups and expressions are called extended statistics. The other important enhancement is the new concept of private and public statistics, with private statistics being statistics that you've collected but not yet published for use by the cost optimizer. We'll cover these new features in the following sections.
One of the key things the cost optimizer considers during its execution plan evaluation is the selectivity (or cardinality) of the columns in the where clause of a query. The number of distinct values in a column determines the column's selectivity. A column with many distinct values has good selectivity, whereas a column whose values are mostly the same has poor selectivity. If you build an index on a column with good selectivity, the optimizer can use it to access the necessary rows faster than it could for a column with poor selectivity.
If a column has poor selectivity—for example, a certain value that's the same in more than 95 percent of the table's rows—it's likely that the cost optimizer will opt for a full table scan, regardless of the existence of an index on that column. If, on the other hand, there are several distinct column values such that no single value appears in more than 1 or 2 percent of the table's rows, the optimizer is likely to choose the index when you issue a query whose where clause contains this column.
Many queries, however, contain multiple columns from a table, each with a different degree of selectivity. Some of these columns are related, but the optimizer doesn't know about the nature of these relationships. For example, in a table containing products and their prices, some products have higher prices than others. The cost optimizer, which bases its calculations on the statistics it gathers for individual columns, doesn't know of this relationship. The optimizer can't estimate the combined selectivity of multiple columns when the data is skewed—that is, when the different values in a column aren't evenly distributed among the rows but rather are bunched together. For the optimizer to estimate the true cardinality of the data, it must know whether adding another column to a given column would reduce the result set. Correlated statistics on multiple columns can provide significantly better cardinality estimates than single-column statistics or histograms, because they tell the optimizer whether an additional predicate actually reduces the result set when the two columns are strongly correlated.
Oracle Database 11g introduces extended statistics (also called multicolumn statistics), wherein you can collect statistics on a set of columns together, thus enabling the optimizer to compute the selectivity of multiple single-column predicates accurately. Since closely related columns can affect the combined selectivity of the columns taken as a group, gathering statistics on the related columns as a group (a column group) means the optimizer has a more accurate estimate of the selectivity for the group of columns as a whole, which is your real focus in queries whose predicates use the related columns. This innovation could mean that a query that used a full table scan in prior releases may now use an index scan, making the query run much faster.
The following example makes the concept of multicolumn statistics clearer. Let's take the customers table in the sh schema, which has two related columns, cust_state_province and country_id. You get the following results when you query the customers table:
If you add the country_id column to the query, the results are still the same:
The COUNTRY_ID 52790 is for the United States. If the COUNTRY_ID column has a different value, such as 52775, the results are quite different:
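The three queries the text describes might look like the following sketch against the sh sample schema. The state value 'CA' and the row counts are illustrative assumptions; your copy of the schema may differ:

```sql
SQL> SELECT COUNT(*) FROM sh.customers
     WHERE cust_state_province = 'CA';

SQL> SELECT COUNT(*) FROM sh.customers
     WHERE cust_state_province = 'CA'
     AND country_id = 52790;    -- same count: every CA row is in the United States

SQL> SELECT COUNT(*) FROM sh.customers
     WHERE cust_state_province = 'CA'
     AND country_id = 52775;    -- no rows: CA never occurs with this country
```

The point of the example is that the second predicate on country_id adds no selectivity when the two columns are correlated, yet the optimizer, looking at each column separately, assumes it does.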
The optimizer doesn't have any idea about the relationship between the CUST_STATE_PROVINCE and COUNTRY_ID columns. However, if you gather statistics on the two columns as a group, the optimizer has a far better estimate of the selectivity for the group, instead of relying on selectivity estimates generated for the two columns separately from the statistics on the individual columns.
Creating Multicolumn Statistics
Although the database creates column groups by default by analyzing the workload in the database, you can create your own column groups by using the DBMS_STATS package. Here's the syntax of the create_extended_stats function, which lets you create a column group:
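The function's basic signature is sketched here (defaulted parameters are omitted; consult the DBMS_STATS package documentation for the full declaration):

```sql
FUNCTION create_extended_stats (
    ownname    VARCHAR2,     -- schema owner; NULL means the current schema
    tabname    VARCHAR2,     -- table name
    extension  VARCHAR2      -- the column group, for example '(col1, col2)'
) RETURN VARCHAR2;           -- returns the system-generated name of the extension
```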
The ownname parameter stands for the schema owner's name; if you're using the current schema, you can specify null as its value. The tabname parameter is for the table name, and the extension parameter is where you list the related columns that are going to be part of your new column group.
The following example shows how to create a column group named group1, consisting of two related columns—CUST_STATE_PROVINCE and COUNTRY_ID—from the customers table:
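A call along these lines creates the column group (a sketch; because create_extended_stats is a function, it is invoked from a query, and the alias group1 simply names the result column):

```sql
SQL> SELECT dbms_stats.create_extended_stats(NULL, 'customers',
            '(cust_state_province, country_id)') AS group1
     FROM dual;
```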
Executing this function creates a new column group group1, consisting of the CUST_STATE_PROVINCE and COUNTRY_ID columns, and adds that group to the customers table. You can check which extensions providing multicolumn statistics exist in the database by issuing the following query:
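A query of this shape against the USER_STAT_EXTENSIONS dictionary view lists the extensions on the table (the view also has DBA_ and ALL_ variants):

```sql
SQL> SELECT extension_name, extension
     FROM user_stat_extensions
     WHERE table_name = 'CUSTOMERS';
```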
The show_extended_stats_name function returns the name of the column group you created as a virtual column, with a database-generated column name. Here's the query:
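The call might look like the following sketch; the generated name shown is the one the text discusses next, though the exact name your database generates will differ:

```sql
SQL> SELECT dbms_stats.show_extended_stats_name(NULL, 'customers',
            '(cust_state_province, country_id)') AS col_group_name
     FROM dual;

COL_GROUP_NAME
------------------------------
SYS_STU#S#WF25Z#QAHIHE#MOFFMM
```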
Using the virtual column SYS_STU#S#WF25Z#QAHIHE#MOFFMM, which represents the true nature of the correlation between the CUST_STATE_PROVINCE and COUNTRY_ID columns, the cost optimizer now has a correct estimate of the true selectivity of the two correlated columns. A virtual column, as its name suggests, isn't physically part of the table, but the optimizer is aware of it when it's evaluating different execution plans.
Gathering Statistics for Column Groups
When you know that some columns of a table are correlated in a skewed manner, you can let the optimizer know that the data is skewed so it can compute better selectivity estimates. The way to do this is by gathering histograms on the skewed column group, as shown in the following example:
SQL> exec sys.dbms_stats.gather_table_stats(null, 'customers', -
     method_opt => 'for all columns size skewonly')
Earlier you learned how to create a new column group using the create_extended_stats function. You can also create a new column group automatically as part of statistics gathering, as shown here:
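The combined create-and-gather call might be sketched as follows, naming the column group in the method_opt argument:

```sql
SQL> exec dbms_stats.gather_table_stats(null, 'customers', -
     method_opt => 'for columns (cust_state_province, country_id) size skewonly')
```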
You specify the new column group you want to create by using the for columns clause. The call both creates the new column group and gathers statistics on that group in a single step.
Deleting Extended Statistics
You can remove any extended statistics you created by executing the drop_extended_stats procedure, as shown in the following example:
You can also use the drop_extended_stats procedure to remove any extended statistics that you created for a function on a column.
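A sketch of the call, dropping the column group created earlier:

```sql
SQL> exec dbms_stats.drop_extended_stats(null, 'customers', -
     '(cust_state_province, country_id)')
```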
Expression Statistics for Functions and Expressions
Another area where the cost optimizer has trouble making true selectivity estimates is columns to which you apply a function or an expression. For example, it's common to use the upper and lower functions with columns such as LAST_NAME and FIRST_NAME in a query, as shown in this example:
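Such a query might look like this sketch (the table and column names here follow the text's example and are illustrative):

```sql
SQL> SELECT * FROM customers
     WHERE UPPER(last_name) = 'SMITH';
```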
Because a function is applied to the column, the optimizer can't get accurate estimates of its selectivity. In Oracle Database 11g, you let the optimizer get better selectivity estimates by gathering expression statistics on the function expression.
Suppose that users enter the customers' last names in uppercase. You can then create extended statistics on the expression UPPER(last_name) so the optimizer can get a correct count of the number of rows:
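The call parallels the column-group example, with the expression in place of the column list (column name illustrative, as above):

```sql
SQL> SELECT dbms_stats.create_extended_stats(NULL, 'customers',
            '(upper(last_name))') AS extension_name
     FROM dual;
```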
As with column groups, you can both create the new extended statistics and gather statistics at the same time by using the DBMS_STATS package, as shown in this example:
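A one-step sketch, naming the expression in the method_opt argument:

```sql
SQL> exec dbms_stats.gather_table_stats(null, 'customers', -
     method_opt => 'for columns (upper(last_name)) size skewonly')
```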
You can monitor expression statistics with the DBA_STAT_EXTENSIONS view, and you can drop the expression statistics from a table with the drop_extended_stats procedure of the DBMS_STATS package.
Changing Statistics Preferences
In Oracle Database 10g, you used the DBMS_STATS package's GET_PARAM function to return the default values of parameters for several procedures that belong to the DBMS_STATS package. The GET_PARAM function is obsolete in Oracle Database 11g. Instead, there is a more powerful new function called GET_PREFS, which gets you the default values of various preferences.
The GET_PREFS function has parameters such as cascade, degree, estimate_percent, granularity, publish, and stale_percent, each of which is called a preference. Here's an example of how you can use the get_prefs function to get the setting of a certain preference, in this case stale_percent:
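A sketch of the call (schema and table names illustrative; by default this preference returns 10):

```sql
SQL> SELECT dbms_stats.get_prefs('STALE_PERCENT', 'SH', 'CUSTOMERS') AS stale_pct
     FROM dual;
```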
If you provide the schema, the table name, and a preference, the function returns the values specified for that table. If you haven't specified any values for that table, the function returns the global preference, if you specified one. If you specified neither a global preference nor a table-specific preference, the function returns the default value for that preference.
The set_param procedure, which enabled you to set default parameter values for various DBMS_STATS procedures in the previous version of the Oracle database, is now obsolete. Instead, there is a new procedure, set_global_prefs, that lets you set global statistics preferences for the various parameters. You need the sysdba privilege to set global preferences using the set_global_prefs procedure.
The following example shows how to use the set_global_prefs procedure:
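A sketch of the call, setting a global estimate_percent preference (the value 20 is illustrative):

```sql
SQL> exec dbms_stats.set_global_prefs('ESTIMATE_PERCENT', '20')
```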
Confirm the change in the estimate_percent parameter’s value in the table product_descriptions by using the following query:
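Such a confirmation query might look like this (since no table-level preference was set, the table inherits the new global value):

```sql
SQL> SELECT dbms_stats.get_prefs('ESTIMATE_PERCENT', 'OE', 'PRODUCT_DESCRIPTIONS')
     AS estimate_percent
     FROM dual;
```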
The set_global_prefs procedure sets global preferences for all parameters you can set for the various procedures of the DBMS_STATS package. You can also set preferences at the database level (set_database_prefs), schema level (set_schema_prefs), and individual table level (set_table_prefs).
Removing Statistics Preferences
You can use new procedures of the DBMS_STATS package to delete current statistics preferences: delete_database_prefs, delete_schema_prefs, and delete_table_prefs, which remove preferences at the database, schema, and table level, respectively.
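For example, removing a single table-level preference might be sketched as follows (schema, table, and preference names illustrative):

```sql
SQL> exec dbms_stats.delete_table_prefs('OE', 'PRODUCT_DESCRIPTIONS', -
     'ESTIMATE_PERCENT')
```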
Exporting and Importing Statistics Preferences
You can also export statistics preferences at the database (export_database_prefs), schema (export_schema_prefs), or table (export_table_prefs) level. The following example exports statistics preferences at the database level:

SQL> exec dbms_stats.export_database_prefs('hrtab', statown=>'hr');
The previous procedure will export the statistics preferences of all tables (except those owned by SYS) to the table hrtab. Similarly, you can import statistics preferences from a specified statistics table where you first exported them at the database level (import_database_prefs), schema level (import_schema_prefs), or table level (import_table_prefs).
Keeping Statistics Pending vs. Publishing Statistics Immediately
In prior database versions, once the DBMS_STATS procedures (or the analyze command) collected statistics, the statistics were immediately offered for use by the cost optimizer. In other words, the publishing of the statistics was automatic. This is still the default behavior in Oracle Database 11g, with the important distinction that you can now opt not to publish the statistics automatically, thus giving you an opportunity to test the new statistics to see whether they actually improve query performance. You can view the default behavior regarding the publishing of statistics by using the get_prefs function, as shown here:
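With no schema or table arguments, get_prefs returns the global setting; the default is TRUE, meaning statistics are published immediately:

```sql
SQL> SELECT dbms_stats.get_prefs('PUBLISH') AS publish FROM dual;

PUBLISH
-------
TRUE
```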
In Oracle Database 11g, there is a distinction between public and private statistics, the former being statistics that have been published for use by the cost-based optimizer in its query optimization process and the latter being statistics that have been gathered but aren't yet published, that is, not made available to the cost-based optimizer. You can choose to collect private statistics at the database level or at the table level, offering you great flexibility in how you use this powerful new feature. By default, statistics that the database collects are stored in the data dictionary. The database stores private statistics that you collect in a private area pending their publishing, thus making them unavailable to the cost-based optimizer, which can use only statistics stored in the data dictionary.
How do you test the use of private statistics by the cost optimizer? The statistics you categorize as private have a pending status at first, before they're published. This is why private statistics are also called pending statistics. After verifying the query performance, you can opt to publish the pending statistics by running the publish_pending_stats procedure in the DBMS_STATS package. If the query performance isn't acceptable, you can delete the pending statistics by running the delete_pending_stats procedure instead.
Making Pending Statistics Available to the Optimizer
Once you test the impact of the new statistics, you can decide whether to publish them. You can make the private statistics available to the cost-based optimizer by setting the initialization parameter optimizer_use_pending_statistics to true. The default value of this parameter is false, meaning the cost optimizer ignores pending statistics by default. Once you change the parameter setting to true, the optimizer will use the pending statistics in preference to the published statistics it has for a table or schema.
In the following example, we use the set_table_prefs procedure explained earlier in this chapter to specify that a certain table's statistics not be published automatically. In other words, we are directing the database to collect private instead of public statistics for this table, and these statistics will be in pending mode until you decide on their acceptance. In the example, we turn off automatic publishing of statistics for the table product_descriptions, owned by the user oe. Here's the example:
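A sketch of the call:

```sql
SQL> exec dbms_stats.set_table_prefs('OE', 'PRODUCT_DESCRIPTIONS', -
     'PUBLISH', 'false')
```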
You can also change the publish settings at the schema level instead of the table level. The set_table_prefs procedure we execute here doesn't trigger an immediate statistics collection, private or public, for the specified table. It stipulates only that when the database next collects statistics for the product_descriptions table, it shouldn't immediately publish the statistics. Instead of publishing the statistics it gathers for the product_descriptions table, the database will store them in the DBA_TAB_PENDING_STATS view.
You can confirm that the product_descriptions statistics will be private (pending) by issuing the following select statement again:
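This time the query names the schema and table, so it returns the table-level preference just set:

```sql
SQL> SELECT dbms_stats.get_prefs('PUBLISH', 'OE', 'PRODUCT_DESCRIPTIONS')
     AS publish
     FROM dual;
```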
The value of false in the previous query means that the statistics for the product_descriptions table won't be automatically published but kept private (pending).
Collecting Private Statistics
First, execute the delete_table_stats procedure to delete any existing statistics for the table product_descriptions. Here's the code:
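A sketch of the deletion call:

```sql
SQL> exec dbms_stats.delete_table_stats('OE', 'PRODUCT_DESCRIPTIONS')
```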
Once you get rid of all the statistics on the table product_descriptions, collect fresh statistics for the table by issuing the following command:
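A minimal gathering call might look like this; because the table's publish preference is false, the collected statistics remain pending:

```sql
SQL> exec dbms_stats.gather_table_stats('OE', 'PRODUCT_DESCRIPTIONS')
```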
The database won't publish the statistics you collected by executing the previous gather_table_stats procedure, since you earlier set the publish preference to false. You can verify that these statistics aren't public by querying the USER_TABLES view in the following manner:
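A query of this shape shows that the dictionary columns reflecting published statistics are still empty:

```sql
SQL> SELECT table_name, num_rows, last_analyzed
     FROM user_tables
     WHERE table_name = 'PRODUCT_DESCRIPTIONS';
```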
The query on the USER_TABLES view shows public statistics only, and thus there is nothing showing that the database actually collected any statistics for the product_descriptions table. However, if you query the USER_TAB_PENDING_STATS view, which holds information about private statistics, you'll see something else:
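A sketch of the query against the pending-statistics view:

```sql
SQL> SELECT table_name, num_rows, last_analyzed
     FROM user_tab_pending_stats;
```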
The USER_TAB_PENDING_STATS view shows pending statistics for tables, partitions,and subpartitions owned by the user issuing the command.
Testing,Publishing, and Deleting Pending Statistics
You can use the export_pending_stats procedure to export the pending statistics to a different database and test them there. If the query results aren't encouraging with the pending statistics, you can delete the pending statistics by executing the delete_pending_stats procedure, as shown here:

SQL> exec dbms_stats.delete_pending_stats('HR','EMPLOYEES');
You can choose to make pending statistics permanent, that is, publish them, by using the publish_pending_stats procedure, as shown in this example:

SQL> exec dbms_stats.publish_pending_stats('HR','EMPLOYEES');

If you want to publish all pending statistics collected for a particular schema, you can do so by executing this procedure:

SQL> exec dbms_stats.publish_pending_stats('HR', null);

You can publish all pending statistics in the database by executing the following procedure:

SQL> exec dbms_stats.publish_pending_stats(null, null);
By default, private (pending) statistics are turned off, as shown by the default setting of false for the new Oracle Database 11g initialization parameter optimizer_use_pending_statistics.
SQL Plan Management (SPM)
It's not at all uncommon for DBAs to confront situations where a well-functioning system encounters performance problems because of a regression in the execution plans of SQL statements. Execution plans for SQL statements can change for a number of reasons, including an upgrade of the Oracle database or the operating system software; changes in optimizer versions, optimizer statistics, or optimizer parameters; data changes; schema definition and metadata changes; deployment of new application modules; adding and dropping indexes; and so on. In earlier versions, Oracle provided features such as stored outlines and SQL profiles to enable the stability of execution plans over time. However, all these required you, the DBA, to decide to use and implement the plan stability features.
Oracle Database 11g introduces a new plan stabilization feature called SQL Plan Management that lets the database control the evolution of SQL execution plans through SQL plan baselines. The goal of SPM is to preserve the performance of your SQL code in the face of changes such as database upgrades, system and data changes, and application upgrades and bug fixes. You can currently use SQL profiles through Oracle's automatic SQL tuning feature to produce well-tuned execution plans, but that is a purely reactive mechanism, which doesn't help you when there are drastic changes in your system. Oracle has designed SPM as a preventative mechanism that aims to preserve the performance of SQL statements in the face of the numerous types of changes listed earlier. Whereas a SQL profile contains statistics that will improve the execution plan for a SQL statement, a SQL plan baseline contains hints for the optimizer to generate a better execution plan for a SQL statement.
Once you enable this new plan stabilization feature, new execution plans generated by changes in factors such as optimizer statistics are accepted only if they don't lead to a deterioration in performance. The cost-based optimizer manages the SQL plan baselines and uses only known and verified plans. After verification, the optimizer uses only comparable plans or plans that yield better performance than the current execution plans. SPM builds SQL plan baselines of known good execution plans. SQL Plan Management aims to replace the plan stability feature of earlier releases of the Oracle database. The older stored outlines feature is deprecated in this release, and Oracle recommends that you migrate your stored outlines to SQL plan baselines.
The database stores an execution plan as a SQL plan baseline, thus maintaining the plan for a set of SQL statements. Provided the database has the necessary optimizer statistics, the optimizer first evolves a best-cost plan and then tries to match it with a plan in the SQL plan baseline. If the optimizer finds a match, it uses this plan. Otherwise, the optimizer will evaluate all the accepted plans in the SQL plan baseline and choose the lowest-cost plan for executing the statement.
SPM involves three distinct phases—capturing, selecting, and evolving SQL plan baselines. The following sections summarize the three phases.
Capturing SQL Plan Baselines
The first phase involves capturing the SQL plan baselines. To support SPM and the performance verification of newly evolved execution plans, the optimizer maintains, for each repeatable SQL statement, a plan history containing the different execution plans generated for that statement. The optimizer considers a SQL statement repeatable if the database parses or executes that statement multiple times after the optimizer logs the statement in the SQL statement log it maintains.
You can load the SQL plan baselines automatically or use manual bulk-loading procedures. First we'll discuss the automatic approach to creating and maintaining plan history for SQL statements, using the optimizer-provided data. Then we'll show how to do this manually.
Automatic SQL Plan Capture
You can enable automatic loading of SQL plan baselines by setting the initialization parameter optimizer_capture_sql_plan_baselines to true, as shown here:

SQL> alter system set optimizer_capture_sql_plan_baselines=true;
System altered.
The default value of the optimizer_capture_sql_plan_baselines parameter is false, meaning automatic plan capture is turned off by default. By setting the parameter's value to true in the parameter file, or with the alter system or alter session statement, you let the database automatically start recognizing repeatable SQL statements and create a plan history for them.
The database uses the optimizer's details about each SQL statement to reproduce the execution plan for it. The first plan for each SQL statement is marked for use by the optimizer. At this point, the plan history and the SQL plan baseline are identical, since there's only a single plan to consider. All new plans that the optimizer subsequently generates for a SQL statement become part of the plan history. During the final SQL plan baseline evolution phase, the database adds to the SQL plan baseline any plan that's verified not to lead to a performance regression.
Manual SQL Plan Loading
In addition to this automatic tracking of repeatable SQL statements, you can also manually seed plans for a set of SQL statements (a SQL tuning set). You can supply the SQL plans as an alternative or in addition to the automatic capture of SQL by the database. An important distinction between automatically capturing SQL plans and manually loading SQL plans is that the database adds all manually loaded plans to a SQL plan baseline as accepted plans, without subjecting them to verification for performance.
You can use the new DBMS_SPM package to load SQL plans directly into a SQL plan baseline. You can use the cursor cache as the source of the SQL plans or load the plans from SQL tuning sets and AWR snapshots. In the following example, we show how to load SQL plans manually from an STS. Make sure you create and load the STS first, before executing the load_plans_from_sqlset function, as shown here:
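A sketch of the call; the STS name and owner are hypothetical, and the function returns the number of plans it loaded:

```sql
SQL> DECLARE
       v_plans PLS_INTEGER;
     BEGIN
       v_plans := dbms_spm.load_plans_from_sqlset(
                    sqlset_name  => 'tuning_set1',   -- hypothetical STS name
                    sqlset_owner => 'HR');
       dbms_output.put_line('Plans loaded: ' || v_plans);
     END;
     /
```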
The previous function loads all SQL plans from an STS into SQL plan baselines. The plans aren't verified, as we mentioned earlier, and are added as accepted plans to an existing or new SQL plan baseline. If you want to load plans from the AWR, first load the SQL plans from the AWR into an STS before executing the load_plans_from_sqlset function. You can alternatively load any SQL plans you want from the cursor cache using the load_plans_from_cursor_cache function, as shown here:
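A sketch of the cursor-cache variant; the SQL ID shown is hypothetical:

```sql
SQL> DECLARE
       v_plans PLS_INTEGER;
     BEGIN
       v_plans := dbms_spm.load_plans_from_cursor_cache(
                    sql_id => '7ztv2z24kw0s0');      -- hypothetical SQL ID
     END;
     /
```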
In this example, we load a plan from the cursor cache by providing its SQL ID. Instead of the SQL ID, you can also use the SQL statement text or the SQL handle to load a SQL statement into a SQL plan baseline.
SQL Plan Baseline Selection
During the selection phase, the database automatically selects, for every SQL statement, SQL plans that avoid a performance regression. The cost optimizer adopts a conservative plan selection strategy that works this way: each time the database compiles a SQL statement, the optimizer builds a best-cost plan and tries to match it with a plan in the SQL plan baseline, using that plan if there is a match. If the optimizer doesn't find a match in the SQL plan baseline, it selects the cheapest accepted plan from the SQL plan baseline for execution. In this case, the optimizer's best-cost plan (which doesn't match any of the plans in the plan history) is added as a new plan to the baseline. The database doesn't use this new plan automatically, since it is as yet a nonaccepted plan. The database can use this plan only after it verifies that the plan doesn't lead to a performance regression, or if it's the only available plan.
Evolving SQL Plan Baselines
A SQL plan baseline can't remain static over time; it has to evolve with the changes in your system by analyzing the good and the not-so-good plans over a period of time. The database doesn't automatically implement all SQL plans generated by the cost-based optimizer. A SQL plan is deemed an accepted plan only after verification of its performance against the SQL plan baseline performance, and the SQL plan baseline consists of all the accepted plans for a specific SQL statement. Evolving a SQL plan baseline means changing a nonaccepted plan into an accepted plan.
The original plan baseline will always consist of at least one plan, which, of course, will be the first execution plan generated by the optimizer for a given SQL statement. The database continuously evaluates the performance of new nonaccepted plans by comparing their performance with a plan selected from the SQL plan baseline. All the plans that don't lead to a performance regression are changed into accepted plans and integrated into the SQL plan baseline.
There are three ways to evolve a SQL plan baseline—by manually loading plans, by using the DBMS_SPM package, and by using the SQL Tuning Advisor. We'll review the three methods briefly in the following sections.
Manually Loading Plans into the Baselines
As described earlier, when you manually load SQL plans by using SQL tuning sets or the cursor cache as the source of the statements, the plans you load are considered accepted plans, which makes them equivalent to verified plans.
Using the DBMS_SPM Package to Evolve Plans
The function DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE evolves baselines associated with one or more SQL statements. The function compares the execution performance of a nonaccepted plan, and if it finds the performance better than the SQL plan baseline performance, it changes the nonaccepted plan into an accepted plan. Here's an example:
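A sketch of the call; the SQL handle is hypothetical, and the function returns a text report describing which plans were verified and accepted:

```sql
SQL> SELECT dbms_spm.evolve_sql_plan_baseline(
            sql_handle => 'SYS_SQL_593bc74fca8e6738',  -- hypothetical handle
            verify     => 'YES',
            commit     => 'YES')
     FROM dual;
```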
You must grant a user the administer sql management object privilege so they can execute the evolve_sql_plan_baseline function.
Evolving Plans with the SQL Tuning Advisor
When you run the SQL Tuning Advisor, it may sometimes recommend the acceptance of a SQL profile when the advisor comes across a plan whose performance is better than any plan in the SQL plan baseline. If you implement the recommendation by accepting the SQL profile, the SQL Tuning Advisor automatically adds that plan to the SQL plan baseline. In Oracle Database 11g, as we explained in the SQL Tuning Advisor discussion, the Automatic SQL Tuning Advisor task that runs nightly during the maintenance window automatically implements the SQL profile recommendations made by the SQL Tuning Advisor. Thus, any tuned high-load SQL statements automatically become part of the SQL plan baselines for those statements.
Displaying SQL Plan Baselines
You can examine the SQL plan baselines for any SQL statement using the DISPLAY_SQL_PLAN_BASELINE function of the DBMS_XPLAN package, as shown here:
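A sketch of the call; the SQL handle is hypothetical:

```sql
SQL> SELECT * FROM TABLE(
       dbms_xplan.display_sql_plan_baseline(
         sql_handle => 'SYS_SQL_593bc74fca8e6738',    -- hypothetical handle
         format     => 'basic'));
```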
As shown here, the DISPLAY_SQL_PLAN_BASELINE function displays and provides information about the baseline plans.
Managing the SQL Management Base
The optimizer is in charge of automatically managing the SQL management base (SMB), which is part of the data dictionary and stores the statement log maintained by the optimizer. The SMB also stores the SQL plan histories and the SQL plan baselines. Unlike the rest of the data dictionary, which is stored in the system tablespace, the SMB is stored in the sysaux tablespace. You can configure the SMB to occupy anywhere between 1 and 50 percent of the size of the sysaux tablespace; the default allocation is 10 percent. A background process that runs once a week will issue an alert if the SMB exceeds its space allocation. You can increase the space allocation percentage, increase the size of the sysaux tablespace, or purge SQL plan baselines and SQL profiles from the SMB to make more room for new data.
Use the new view DBA_SQL_MANAGEMENT_CONFIG to see the configuration parameters of the SQL management base. There are two configuration parameters you can modify, space_budget_percent and plan_retention_weeks. The space_budget_percent parameter shows the current SMB space limit in the sysaux tablespace, and the plan_retention_weeks parameter shows how long the database retains the data in the SMB before automatically purging it. You can view the current values for both of these parameters by issuing the following statement:
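A sketch of the query, with the default values shown:

```sql
SQL> SELECT parameter_name, parameter_value
     FROM dba_sql_management_config;

PARAMETER_NAME          PARAMETER_VALUE
---------------------   ---------------
SPACE_BUDGET_PERCENT                 10
PLAN_RETENTION_WEEKS                 53
```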
This query shows the default values for the two configuration parameters related to the SQL management base. The database will retain unused plans for 53 weeks before purging them. You can also purge individual plans with the help of the purge_sql_plan_baseline function. You can change either of the two configuration parameters by using the configure procedure of the new DBMS_SPM package. The following example shows how to change the setting of the space_budget_percent parameter:

SQL> execute dbms_spm.configure('space_budget_percent', 30);
A scheduled purging job that runs as an automated task in the maintenance window takes care of purging unused plans. The purging job automatically removes any execution plan that the optimizer hasn't used in more than a year.
You can, however, change this default purging behavior and specify that an unused plan not be purged for a period as long as 10 years, as shown in the following example (520 weeks is 10 years):

SQL> execute dbms_spm.configure('plan_retention_weeks', 520);
The execution of the configure procedure in the example ensures that all execution plans in the SQL management base are retained for 10 years.
Managing SQL Plan Baselines
You can use the DBA_SQL_PLAN_BASELINES view to see the current SQL plan baselines for SQL statements. Besides various SQL plan and SQL identifiers, execution statistics, and timestamps, the view contains key columns such as sql_handle, plan_name, origin (which shows whether the plan was captured automatically or loaded manually), enabled, accepted, and fixed.
Here's an example showing how you can get details about plan baselines from the DBA_SQL_PLAN_BASELINES view:
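A sketch of such a query:

```sql
SQL> SELECT sql_handle, plan_name, origin, enabled, accepted
     FROM dba_sql_plan_baselines;
```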
The query shows whether the plan was captured automatically or manually,as well as whether it is accepted and enabled.
You can alter any of the attributes of a SQL plan baseline by using the DBMS_SPM.ALTER_SQL_PLAN_BASELINE function. You can also import and export SQL plan baselines using Oracle Data Pump and procedures from the DBMS_SPM package.
ADDM Enhancements
In Oracle Database 10g, Oracle introduced the Automatic Database Diagnostic Monitor (ADDM), which uses the Automatic Workload Repository snapshots to provide diagnoses regarding SQL statements, I/O, and CPU usage in the database.
ADDM is based on the single-minded goal of reducing "DB time." ADDM provides you with quick results that help pinpoint the root cause of a performance problem, besides telling you what the nonproblem areas are, so you don't waste your time chasing down unproductive avenues.
Instance ADDM and Database ADDM
You can run the ADDM in two distinct modes in Oracle Database 11g.In the first mode,called the instance ADDM, the monitor provides traditional instance-level performance evaluation.
The instance ADDM is what you have in Oracle Database 10g.Oracle calls the new ADDM mode introduced in the new release database ADDM, and its goal is to analyze and report on Oracle RAC installations.
In this mode, the ADDM report will contain an analysis of the cluster as a unit besides an analysis of each of the instances that are part of the cluster. Oracle's target for the database ADDM is database administrators rather than application developers, for whom the instance ADDM is more relevant.
The instance ADDM runs automatically after the database takes each AWR snapshot,and it produces the same reports that it did in the Oracle Database 10g database.Depending on whether you run the ADDM in global or instance mode, you’ll have global findings or local findings.
The global ADDM is appropriate only for an Oracle RAC setup, where the tool can analyze the performance of the entire cluster together as one unit. For example, the DB time is computed as the sum of the database times for all instances in the RAC cluster.
Global ADDM also reports facts such as CPU consumption and server performance at the aggregate cluster level, thus giving you an overall view of how the Oracle RAC installation is performing. You can also perform a partial analysis of just a subset of the instances in a cluster.
This is known as the partial analysis ADDM.If you are concerned about the performance of only some but not all instances in a cluster, partial analysis ADDM will provide results faster than an exhaustive database ADDM analysis.
Running ADDM Over a Specific Time Period
You can now run an ADDM task for any specific range of time you want, instead of being able to run it only over a pair of snapshots.This is a great help,especially when there is a hung database or a sharp spike in activity in the database that hasn’t been captured yet by an AWR snapshot.
Specifying the Type of ADDM Task
You can use the DBMS_ADVISOR package's set_default_task_parameter procedure to specify the type of ADDM analysis you want to run. You set the values for the INSTANCE and INSTANCES parameters to select among a database ADDM, instance ADDM, or partial analysis ADDM.
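As a sketch only (the parameter values here are assumptions based on the description above, not taken from the original), the calls would look along these lines:

```sql
BEGIN
  -- Analyze all instances together (database ADDM)
  DBMS_ADVISOR.SET_DEFAULT_TASK_PARAMETER('ADDM', 'INSTANCES', 'ALL');
  -- Or restrict the analysis to a single instance (instance ADDM)
  DBMS_ADVISOR.SET_DEFAULT_TASK_PARAMETER('ADDM', 'INSTANCE', '1');
END;
/
```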
The various advisors that are part of the management advisory framework issue findings. Oracle Database 11g names and classifies the advisor findings, making it easier for you to keep track of the frequency with which various findings occur. There is a new view named DBA_ADVISOR_FINDING_NAMES that lets you look up all finding names. Here's a query that lets you view all findings in the database:
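The query itself was omitted from this text; in 11g the catalog of finding names is exposed through the DBA_ADVISOR_FINDING_NAMES view, so a plausible reconstruction is:

```sql
SELECT finding_name
FROM   dba_advisor_finding_names
ORDER BY finding_name;
```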
There is also a new FINDING_NAME column added to the DBA_ADVISOR_FINDINGS view, which lets you make queries such as the following:
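For example, a query counting how often each finding has been issued (a plausible reconstruction of the omitted example, using the FINDING_NAME column just described):

```sql
SELECT   finding_name, COUNT(*) AS occurrences
FROM     dba_advisor_findings
GROUP BY finding_name
ORDER BY occurrences DESC;
```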
Classification of findings applies to all types of advisors, not just the ADDM, and helps you find the frequency with which different findings are issued.
Using the New DBMS_ADDM Package
The DBMS_ADDM package is a new package in Oracle Database 11g Release 1, designed to make it easier to manage ADDM. Here's a brief summary of the main procedures and functions of this package:
Note that you must pass the beginning and ending snapshot IDs when creating any type of an ADDM task. Here’s an example of an instance ADDM execution with a pair of snapshots.
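The original example is not preserved here; assuming snapshot IDs 100 and 101 and a task name of your choosing (all placeholders), an instance ADDM run with DBMS_ADDM.ANALYZE_INST looks like this:

```sql
VAR task_name VARCHAR2(30);
BEGIN
  :task_name := 'my_instance_addm_task';  -- placeholder task name
  -- Arguments: task name (in/out), begin snapshot ID, end snapshot ID
  DBMS_ADDM.ANALYZE_INST(:task_name, 100, 101);
END;
/
```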
To get a text report of the previous task, use the get_report function, as shown here:
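The call itself was omitted; GET_REPORT takes the task name (the name below is a placeholder) and returns the report as a CLOB:

```sql
SET LONG 1000000
SELECT DBMS_ADDM.GET_REPORT('my_instance_addm_task') FROM dual;
```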
The DBMS_ADDM package lets you add various directives when you create a new ADDM task to help you control the work of the ADDM. Here’s a description of the various procedures you can use to create directives to limit the work of the ADDM:
Here’s an example that shows how to create an ADDM directive to filter out findings relating to undersized instance memory:
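The example block was lost in this text; based on the description that follows (at least five active sessions and 10 percent of database time) and the documented DBMS_ADDM.INSERT_FINDING_DIRECTIVE signature, it would look roughly like this (the task and directive names and snapshot IDs are placeholders):

```sql
VAR task_name VARCHAR2(30);
BEGIN
  -- A NULL task name applies the directive to all subsequently created ADDM tasks
  DBMS_ADDM.INSERT_FINDING_DIRECTIVE(
    task_name           => NULL,
    dir_name            => 'Undersized Memory Directive',
    finding_name        => 'Undersized instance memory',
    min_active_sessions => 5,
    min_perc_impact     => 10);
  :task_name := 'my_directive_task';
  DBMS_ADDM.ANALYZE_INST(:task_name, 100, 101);
END;
/
```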
The example shown here creates an instance ADDM task with a finding directive that reports the "Undersized instance memory" finding only if that finding is related to at least five active sessions during the time of the ADDM analysis.
In addition, the finding must be responsible for at least 10 percent of the database time during the analysis period. You can delete any directive you create with the DELETE_* procedures. For example, the delete_finding_directive procedure will delete a finding directive.
The AWR generates automatic snapshots of performance data on an hourly basis. You can also manually generate snapshots with the DBMS_WORKLOAD_REPOSITORY package. The ADDM then uses the data thus collected in the AWR.
In Oracle Database 10g, you learned how to use AWR baselines, which provide a frame of reference for comparing performance and advisor reports between two periods.
An AWR baseline is simply database performance data from a set of AWR snapshots,representing database performance at an optimum level.When a performance problem occurs,the database compares performance data from a previous “good” period with the performance of a similar workload.
You create a baseline with the create_baseline procedure of the DBMS_WORKLOAD_REPOSITORY package and drop it by executing the drop_baseline procedure.
You could create and delete only simple fixed baselines in Oracle Database 10g. A fixed baseline uses a fixed period that you specify. Oracle Database 11g adds more sophistication to the management of AWR baselines by providing the capability to create more complex and productive baselines.Oracle Database 11g contains the following enhancements relating to AWR baselines:
You can still use the DBMS_WORKLOAD_REPOSITORY package's procedures create_snapshot, drop_snapshot, modify_snapshot_settings, create_baseline, and drop_baseline to manage snapshots and baselines.
In Oracle Database 11g,you have access to new procedures that let you manage the added functionality of the AWR baselines.We’ll briefly describe the AWR baseline enhancements in the following sections.
Moving Window Baselines
A moving window baseline doesn't require you to specify a pair of snapshots. The range of the baseline keeps "moving" or changing continuously. By default, Oracle automatically creates a system-defined moving window baseline named system_moving_window, whose time span corresponds to the entire AWR retention period, which is eight days by default now (instead of seven days).
The default moving window baseline will always use AWR data from the previous eight days. Here's how you can view the moving window baseline information:
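The original query was not preserved; one way to view this information is through the DBA_HIST_BASELINE view:

```sql
SELECT baseline_name, baseline_type, moving_window_size
FROM   dba_hist_baseline
WHERE  baseline_type = 'MOVING_WINDOW';
```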
Note that the eight-day span for system_moving_window is a default value, which you can modify. However, the time range for this window can't exceed the AWR retention period.
Therefore, if you want to specify a 30-day time span for thesystem_moving_window, you must first raise the AWR retention period to 30 days, up from its default of eight days. (In Oracle Database 10g, the default was seven days.) Use the new modify_baseline_window_size procedure of the DBMS_WORKLOAD_REPOSITORY package to modify the size of the default moving window:
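The call itself was omitted from this text; matching the 15-day window size discussed next, it looks like this:

```sql
BEGIN
  -- window_size is given in days and must not exceed the AWR retention setting
  DBMS_WORKLOAD_REPOSITORY.MODIFY_BASELINE_WINDOW_SIZE(window_size => 15);
END;
/
```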
The window_size parameter sets a new window size for the default moving window baseline. In this case,we set the new window size to 15 days, which is greater than the default AWR retention period of eight days.You can set only a window size that is equal to or less than the AWR retention setting.In cases such as this, where you want to specify a window size greater than the default AWR retention setting,you must first execute the modify_snapshot_settings procedure to change the AWR retention setting.
Oracle Database 11g provides the ability to create baseline templates,which enable you to create AWR baselines automatically for a specific period, whether that period is in the past or in the future.
The big advantage to using a baseline template to create baselines is that you don’t need to supply the starting and ending snapshot identifiers for the create_baseline procedure.Oracle Database 11g provides you with two types of AWR baselines: single and repeating.
A single baseline is static in nature.A repeating baseline is dynamic and changing since it captures performance over changing periods.Oracle also provides you with baseline templates to create both types of AWR baselines:
Generating a Template for a One-Time Baseline
The following syntax enables you to specify a template for the creation of a single AWR baseline in some future period:
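The syntax block was omitted; using the documented DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE_TEMPLATE signature for a single baseline, a call looks along these lines (the names, times, and expiration are placeholders):

```sql
BEGIN
  DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE_TEMPLATE(
    start_time    => TO_DATE('2009-07-04 00:00', 'YYYY-MM-DD HH24:MI'),
    end_time      => TO_DATE('2009-07-04 06:00', 'YYYY-MM-DD HH24:MI'),
    baseline_name => 'july4_baseline',
    template_name => 'july4_template',
    expiration    => 30);  -- keep the baseline for 30 days
END;
/
```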
Note that you can use the expiration attribute to set the time (in days) for which Oracle must maintain a baseline.The default value is null, meaning the database will never drop a baseline.
Generating a Template for a Repeating Baseline
You can specify a template for creating a repeating baseline using thefollowing procedure:
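The procedure call was omitted here; the repeating form of CREATE_BASELINE_TEMPLATE takes a schedule rather than a single time range, roughly as follows (all values below are placeholders):

```sql
BEGIN
  DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE_TEMPLATE(
    day_of_week          => 'MONDAY',
    hour_in_day          => 9,              -- start at 9 a.m.
    duration             => 3,              -- each baseline covers 3 hours
    expiration           => 30,             -- keep each baseline for 30 days
    start_time           => SYSDATE,
    end_time             => SYSDATE + 90,   -- generate baselines for 90 days
    baseline_name_prefix => 'monday_morning_bl_',
    template_name        => 'monday_morning_template');
END;
/
```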
You can use the drop_baseline_template procedure to drop either a one-time or a repeating baseline template.
Ability to Rename Baselines
You can use the new rename_baseline procedure to rename an AWR baseline, as shown in the following example:
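The example was not preserved; a representative call (the baseline names are placeholders, and the optional dbid parameter is omitted so the local database is assumed) is:

```sql
BEGIN
  DBMS_WORKLOAD_REPOSITORY.RENAME_BASELINE(
    old_baseline_name => 'peak_load_bl',
    new_baseline_name => 'monday_peak_load_bl');
END;
/
```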
The dbid parameter, as in the earlier example,is optional,and the database identifier for the local database will be used as the default value for dbid if you don’t provide one.
Ability to Set Expiration Dates for Baselines
When you use the create_baseline procedure to create a new AWRbaseline,you can specify the expiration time for that baseline by setting a value for the new expiration attribute of the procedure.
The default value of null for the expiration attribute means that a baseline will never expire, which was the only behavior possible in Oracle Database 10g.
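A minimal sketch of create_baseline with the new attribute (the snapshot IDs and baseline name are placeholders); expiration is given in days:

```sql
BEGIN
  DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE(
    start_snap_id => 100,
    end_snap_id   => 101,
    baseline_name => 'peak_load_bl',
    expiration    => 30);  -- drop the baseline after 30 days
END;
/
```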
Baselines and Template Dictionary Views
There are several new DBA views to help you manage AWR baselines and templates. The three views that will help you the most are DBA_HIST_BASELINE, DBA_HIST_BASELINE_DETAILS, and DBA_HIST_BASELINE_TEMPLATE.
Adaptive Baseline Metric Thresholds
The Oracle database uses performance alerts to warn you about violations of thresholds for various activities in the database. However, setting alert thresholds is a difficult task, because the expected metric values can vary with the instance load and workload type.
In the Oracle Database 11g release, the database uses the AWR baseline metric statistics to determine alert thresholds. Once the database computes baseline statistics for a specified baseline, you can align the metric thresholds with these AWR baselines. These thresholds are termed AWR baseline metric thresholds. Using AWR baselines as the source for the metric statistics lets you specify adaptive thresholds for database performance. The new metric thresholds are called adaptive since they're based on the system_moving_window baseline, which is by definition constantly changing.
You can define AWR baseline alert thresholds either through the PL/SQL interface or through Database Control. The following steps show you how to use Database Control to configure adaptive thresholds: