Real Application Testing - Oracle 11g

Adopting new technologies is too often a double-edged sword, making you more efficient and thus providing a competitive advantage while simultaneously introducing uncertainty and potential instability into key production systems.

Change assurance, which involves making sure major changes such as new software releases and database upgrades don’t negatively impact performance, has always been a prime concern for Oracle application developers and database administrators. Even if you can simulate real production workloads, the effort is just that—a simulation, not the real deal.

In a world that is technologically advancing at a mind-numbing pace, you need to know which of the technologies have the potential to benefit you; therefore, you need to perform real testing with real data in real conditions.

Oracle Database 11g places considerable emphasis on the proactive testing of changes by making change assurance one of the cornerstones of the new release. It does this through the Real Application Testing feature; this feature encompasses two components, Database Replay and the SQL Performance Analyzer, that dramatically reduce the risks inherent in adopting changes by offering you a realistic method of testing changes using real-life workloads. The tools unearth problems and give you the opportunity to fix them before you actually introduce the changes into your production systems. Here’s a brief summary of the two key components of Real Application Testing in Oracle Database 11g:

  1. Database Replay: You can use the Database Replay feature to test the impact of a database upgrade on the production workload by capturing and replaying the production workload on a test system before you actually perform the upgrade on your production database. Using the Database Replay reports from a test server, you can fix potential problems before they occur on the production database.
  2. SQL Performance Analyzer (SPA): You can use the SQL Performance Analyzer to predict the impact of any change, such as an upgrade to a new release, on the SQL workload as captured by a SQL tuning set. By knowing ahead of the actual upgrade about any adverse impact on performance and the root cause for it, you can prevent it from actually occurring in a production database after a database upgrade. We discuss the SPA later in this chapter.

In addition to the Database Replay and SQL Performance Analyzer features, there’s also a third new feature pertaining to change management, called SQL Plan Management, which replaces the stored outlines feature of earlier releases. The SQL Plan Management feature relies on the use of SQL plan baselines, which represent efficient execution plans. When you adopt SQL Plan Management following a database upgrade, only those SQL plans that don’t result in a performance regression are used.

Database Replay

One of the major problems you face during an Oracle server software upgrade or an application upgrade is the difficulty of simulating the actual production workload on the test databases. This is also true when you’re moving to a totally new database configuration, say from a regular operating system file system to Automatic Storage Management.

Even if you use sophisticated testing suite software, it’s not easy to accurately reproduce the true workload of a production database. Consequently, you are forced to test in an unrealistic setting and take your chances when you move to the new release of the server software or the application. It’s not at all uncommon for DBAs and developers to bemoan the fact that the testing folks couldn’t adequately “stress test” the changes before they were approved for the switch to production.

The Database Replay feature provides a solution to the vexing problem of reproducing production conditions in testing and migration environments. By making it significantly easier to test potential database changes, Oracle Database 11g lowers the cost of database upgrades as well as other major changes such as operating system and storage system upgrades.

Testing that would usually take months when done with scripts and traditional load-simulation tools can now be done at dazzling speed with the Database Replay feature. Database Replay lets you capture the actual workload on a production system and analyze it by replaying it on a test system.

The goal is to replicate the production environment in toto on a test system. Since the characteristics of the original workload, such as concurrency and timing, are maintained during the replay, you’re essentially working with the same type of resource contention and other characteristics. This lets you easily identify any negatives that’ll potentially be introduced by making the application, system, or software changes.

The goal is to make sure the change you’re making, such as a database upgrade, gets you only desirable results. Note that your test system must be running the same or a newer version of Oracle Database compared to the production system.

The key fact you must understand here is that Database Replay captures only the workload at the database level and ignores all client, application, and middle-tier interactions.

The replay will capture the impact of any system changes that affect the database, such as an upgrade of the database itself, an upgrade of the operating system, or a switch to a new disk storage system. The production database is backed up and restored on a test system with an identical configuration and environment. Your goal is to replicate the production system with the same application state as the original.

You can use Database Replay for testing the following types of changes:

  • Database upgrades.
  • Operating system upgrades.
  • Storage system changes.
  • Configuration changes such as switching to Oracle Real Application Clusters.

Using Database Replay to replay a production workload consists of four steps:

  1. Workload Capture records the production database workload.
  2. Workload Preprocessing makes the captured workload replayable by converting it into replay files.
  3. Workload Replay replays the production workload in the test database with actual timings, following the changes you want to test.
  4. Analysis and Reporting deals with error reporting as well as reports on data divergence and performance divergence between the production and test environments. You can also enlist the ADDM for a deeper performance analysis.

Oracle provides an interface called the Workload Replay Client to let you interface with the Workload Replay feature using the command line. To access the Workload Replay Client, type wrc at the operating system command line, as shown here:
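$ wrc system/<system_password> mode=replay replay_dir=./replay_dir

(The credentials and replay directory shown here are illustrative; this is the same form of invocation we use for the actual replay later in the chapter.)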

The first thing to note is that wrc can work in different modes to perform various tasks. The default mode is REPLAY, which we’ll use to replay the workload that we capture.

However, it may be necessary to run wrc in calibrate mode first to estimate the number of replay clients and hosts (CPUs) needed to replay the captured workload. By default, the connection_override parameter is set to false, meaning that all replay threads connect using the connection settings in the DBA_WORKLOAD_CONNECTION_MAP view.

Each thread of the multithreaded wrc (replay client) program submits the workload of a single session for replaying. You must first connect the replay client to the test system before starting the workload replay, as we show later in this chapter.

The Workload Replay Client will connect to the replay system and send database requests that are part of the captured database workload. You can manipulate several options of wrc to control replay behavior, including scaling the login time and think time up and down. You can put these options to use for load and stress testing on the test database.

Types of Data Captured in the Workload

The workload you capture from the production system consists of the following types of SQL calls:

  • Data Manipulation Language (DML) statements
  • Data Definition Language (DDL) statements
  • Session control calls such as alter session
  • System control calls such as alter system

Any background activities and scheduled jobs aren’t part of the data capture. Client requests such as a direct path load of data from external files using SQL*Loader, database links, external tables, Oracle Streams, non-SQL-based object access, and distributed transactions are the types of data that aren’t captured as part of the database workload.

Oracle recommends you use Oracle Enterprise Manager (Database Control or Grid Control) to capture and replay the workload data.

However, you can also use the APIs available through the new Oracle packages DBMS_WORKLOAD_CAPTURE and DBMS_WORKLOAD_REPLAY to perform the workload capture and replay that are at the heart of the Database Replay feature. You use the procedures from the DBMS_WORKLOAD_CAPTURE package to capture the production workload and the procedures from the DBMS_WORKLOAD_REPLAY package to replay the captured production workload on the test system.

You must have either the dba privilege or the execute_catalog_role privilege to use the two packages. In this chapter, we show you how to use the various procedures of these two key new Oracle packages to work with the Database Replay feature.

Before you can capture the workload, you must create a directory that’ll hold the captured workload data. You must also back up the database before the capture process begins so you can replay the captured data on a system with an identical application state as that of the source database.

You can use filters (with the add_filter procedure) to avoid capturing some user sessions, such as the Oracle Enterprise Manager sessions or DBA actions, which aren’t deemed part of the database workload for the purposes of change management.

We briefly describe how to perform each of the four key steps of Database Replay in the following sections.

Capturing the Workload

During the workload capture phase, Database Replay will capture all requests made by external clients to the Oracle database. These external client requests primarily include calls made by applications such as SQL*Plus and middle-tier components.

Information pertaining to calls made by the external clients, such as SQL text and bind values, is collected and stored in binary format in what are known as capture files, in a location you choose.

Before you can start capturing the workload, you must do the following:

  1. Back up the production database so you can restore it on the test system later for replaying the workload. The goal is to reproduce the application state of the production system by restoring the database before you replay the captured workload. You can use RMAN, a point-in-time recovery, Data Pump Export/Import, or a snapshot standby to back up the production database before starting the data capture.
  2. Decide whether you want to restart the production database. You don’t have to restart the production database before starting the workload capture, but note that any transactions that are underway when you begin the workload capture may not be captured.
  3. If you decide to restart the production database, you must do so in restricted mode after logging in as the user sys. Once you start the capture process, you can open the database for all user sessions.
  4. Create a directory with enough space to store the captured workload. Use the create directory SQL statement to create the directory object.
  5. Determine whether some of the user sessions, such as DBA sessions, don’t need to be captured as part of the workload. You can use workload filters to eliminate these sessions from the workload, as shown in the following example, which uses the add_filter procedure to filter parts of the workload from the data capture:
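SQL> exec dbms_workload_capture.add_filter (fname => 'filter_dba_sessions', fattribute => 'USER', fvalue => 'SYSTEM');

(A sketch: the filter name and the excluded user are illustrative. This filter excludes all sessions belonging to the user SYSTEM from the capture; fattribute can also take values such as MODULE, ACTION, PROGRAM, and SERVICE.)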

You can remove any filters you create by executing the delete_filter procedure. Use the start_capture procedure of the DBMS_WORKLOAD_CAPTURE package to capture the database workload during a representative period of operation.
Here's a simple example showing how to capture a database workload:
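SQL> exec dbms_workload_capture.start_capture (name => 'upgrade_capture', dir => 'CAPTURE_DIR', duration => 14400);

(A sketch: the capture name and directory object are illustrative. The duration parameter is specified in seconds, so 14400 seconds corresponds to the 4-hour capture window discussed below.)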

The name parameter specifies the name of this workload capture process. The dir parameter refers to the directory object pointing to the directory you created for storing the captured workload data.

Although you can stop the workload capture at any time with the finish_capture procedure, here we use the duration parameter instead to limit the workload capture to 4 hours (240 minutes). If you choose to stop the workload capture before the end of the 4-hour duration, you can do so in the following way:

SQL> exec dbms_workload_capture.finish_capture();

The execution of the finish_capture procedure ends the workload capture. It’s a good idea at this point to generate a workload capture report to validate your data capture.

Validation includes ascertaining that you did capture the correct workload and didn’t miss any critical part of the production workload by accident. Of course, you must also make sure you can replay the captured data on the test server later. Use the DBMS_WORKLOAD_CAPTURE.GET_CAPTURE_INFO function together with the REPORT function to generate a workload capture report, as shown here:
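DECLARE
  cap_id  NUMBER;
  cap_rpt CLOB;
BEGIN
  -- Get the ID of the workload capture stored in the capture directory
  cap_id := dbms_workload_capture.get_capture_info (dir => 'CAPTURE_DIR');
  -- Generate the capture report in text format
  cap_rpt := dbms_workload_capture.report (capture_id => cap_id, format => 'TEXT');
END;
/

(A sketch: the directory object name is illustrative, and you would typically print or spool the returned CLOB.)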

The procedure shown here produces a report in text format. The report shows you a profile of the workload that was captured as well as a profile of the workload that was excluded by using filters or that couldn’t be captured because of version limitations.

You also get statistics such as the number of logins and transactions the workload captured. Although the workload capture report is useful in many ways, it’s the workload replay report, which you’ll see later, that’s really of more concern to you, because it tells you about the data and performance differences between the original system and the replay system.

Once you finish capturing the production system workload, it’s time to prepare the workload for the replay on the test system, as shown in the next section.

Preprocessing the Data

Before you can replay the production data on a test server, you must first preprocess the data and initialize it. Preprocessing the workload entails converting the data into replay files, as well as creating the metadata necessary for replaying those files.

Of course, you must preprocess the workload on a system that’s running the same version of Oracle Database as the one from which you captured the database workload.

Preprocessing essentially involves creating replay files from the captured data, which you can then replay on the test system. You typically process the workload in the test database where you’re planning the replay. You can preprocess the data on the production system itself, but since preprocessing is a resource-intensive process, you’re better off doing it on the test system. Processing the workload sets the stage for replaying it by transforming the workload into replay files.

To preprocess the workload data, you must first move the captured workload from the production system to a directory on the test system, so create a directory object for this directory on the test system beforehand. Use the process_capture procedure to process the captured workload data:
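SQL> exec dbms_workload_replay.process_capture (capture_dir => 'REPLAY_DIR');

(The directory object name is illustrative.)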

The process_capture procedure has only a single parameter, capture_dir, which points to the directory object holding the captured workload data. Once you preprocess the workload data, you can replay it on the same or a newer release of the Oracle database.

Performing the System Change

Now that you have moved the captured workload from the pre-change production system to the test system, make the system changes you want to test. For example, if you’re planning to test a database upgrade, perform that upgrade now so you can replay the captured workload from the pre-change database to test the database upgrade.

Replaying the Workload

Before you start the workload replay, restore the test database so it reflects the same application state as the production database.

You must also ensure that all external linkages and objects such as database links, external tables, directory objects, and URLs present in the production system are present in the test system as well.

Oracle recommends you reset the system time on the test replay system to the time when you started the workload capture to avoid invalid data sets when dealing with workloads containing time-sensitive data, as well as to eliminate any potential job-scheduling issues.

You may want to start the database you restored in restricted mode to avoid accidental changes to data during the workload replay. Replaying the workload requires several steps, which we summarize in the following sections.

Resolving External References

You must resolve all external references, such as database links, that are part of the captured workload. You must either disable or reconfigure the external references in the workload, as necessary, to make them accessible from the test system. These external references include database links, external tables, directory objects, and URLs.

Initializing the Replay Data

The next step is to initialize the replay data. Initializing the data, which you perform through the initialize_replay procedure, loads into tables the metadata that will be used by the workload replay process:
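SQL> exec dbms_workload_replay.initialize_replay (replay_name => 'test_replay', replay_dir => 'REPLAY_DIR');

(The replay name and directory object are illustrative.)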

The replay_name parameter refers to the name of the replay. The replay_dir parameter points to the directory containing the captured workload that you want to replay on the test system.

Remapping External Connections

Once you initialize the workload data, you can query DBA_WORKLOAD_CONNECTION_MAP to view the connection mappings, if any external connections were made by the source (production) database users during the workload capture.

You must now remap the connection strings that were utilized during the workload capture so the individual user sessions can connect to the external databases. You remap the external connection strings using the remap_connection procedure.

The replay clients must connect to the replay system in order to replay the workload. On single-instance systems (not Real Application Clusters), the connection strings between the capture and replay systems are mapped one-to-one.

Here’s how you remap the external connections using the remap_connection procedure:
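SQL> exec dbms_workload_replay.remap_connection (connection_id => 111, replay_connection => 'prod1:1521/mydb');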

The connection_id parameter signifies a connection from the workload capture (generated during replay data initialization), and the replay_connection parameter (an optional parameter) specifies the new connection string to be used during the workload replay. If you leave the replay_connection parameter’s value set to null (the default value), all replay sessions will connect to the default host, which is determined by the replay client’s runtime environment.

Starting the Replay Clients

Before you can replay the workload, you must ensure that the replay clients can connect to the test database. Each thread of the replay client (started by the executable wrc) submits a workload from a session. Once you process the workload data, you must start the replay client(s).

The replay client connects to the database and drives the replay of the workload data. Before you can start the replay clients, make sure the replay directory contains the (preprocessed) workload files, the replay clients have access to that directory, and the replay user has the correct credentials to connect.

The wrc executable runs in replay mode by default. However, first run the wrc executable in calibrate mode to estimate the number of replay clients and hosts needed to replay your captured workload.

Each replay client can handle multiple user sessions. If, for example, the captured workload was generated by 200 user sessions and a single host can handle only about 120 sessions, you’ll need two hosts, with a replay client running on each of those hosts. This means you must install wrc on both hosts. Here’s how you run the wrc executable in calibrate mode:

$ wrc system/<system_password> mode=calibrate replay_dir=./test_dir

Once you figure out the number of hosts and replay clients you need to replay the actual workload, you then start the necessary replay clients by executing the wrc executable in the (default) replay mode, as shown here:

$ wrc system/<system_password> mode=replay replay_dir=./test_dir

By default, the connection_override parameter of the wrc executable is set to false, meaning that all replay threads will simply use the connection mappings in the DBA_WORKLOAD_CONNECTION_MAP view to connect.

Preparing the Workload Replay

Before you start the actual workload replay, you must specify various workload replay options. The following are the three replay options you can configure to control the database replay:

  • Mode: By default, a workload replay is performed in synchronization mode. If you think the workload is primarily composed of independent transactions, you must disable synchronization mode explicitly, because data divergence during the replay under those circumstances isn’t something to worry about.
  • Connection Time Scale: You can use the connect_time_scale parameter to calibrate the time between the workload capture and the session connect time during the replay. Use this parameter to adjust the number of concurrent users higher or lower during the workload replay.
  • Replay Speed: To correct for a longer time for the completion of user calls during the workload replay, use the think_time_auto_correct parameter.

The think_time_auto_correct parameter helps you adjust the concurrency level during the workload replay. To scale the elapsed time between user calls during the replay, use the think_time_scale parameter. If the replay is progressing more slowly than the data capture, the think time component of the replay is automatically reduced by default (think_time_auto_correct is set to true by default).

Use the prepare_replay procedure to prepare the workload for the replay on the test system, including setting the replay options discussed earlier, as shown here:
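SQL> exec dbms_workload_replay.prepare_replay (synchronization => FALSE);

(A sketch: here synchronization is turned off, matching the discussion of independent transactions that follows.)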

Only the synchronization parameter needs explanation. By setting the synchronization parameter to false (the default value is true), you’re saying that the commit order in the workload capture may or may not be preserved during the replay. You can do this in a situation where there are numerous independent transactions that don’t have to follow a certain commit order.

Once you’ve completed all the requirements for the workload replay, it’s time to start the workload replay. Use the start_replay procedure to do this, as shown here. After making sure you have started at least one wrc replay client, issue the following command to replay the workload:

SQL> exec dbms_workload_replay.start_replay();

The start_replay procedure doesn’t require any parameters. You can cancel the replay operation using the cancel_replay procedure, if you want, as shown here:

SQL> exec dbms_workload_replay.cancel_replay();

The cancel_replay procedure directs all replay clients (wrc) to stop submitting workload from the captured sessions.

At the end of the workload replay, all AWR snapshots that correspond to the replay time period are automatically exported. If the export of the snapshots fails for some reason, you can manually export them by executing the export_awr procedure. You can then import these snapshots into the AWR schema owned by the user sys by executing the import_awr procedure.
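Here’s a sketch of the two calls (the replay ID and the staging schema name are illustrative; note that import_awr is a function that returns the ID of the imported AWR data):

SQL> exec dbms_workload_replay.export_awr (replay_id => 72);
SQL> variable snap_id number
SQL> exec :snap_id := dbms_workload_replay.import_awr (replay_id => 72, staging_schema => 'STAGE_USER');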

Analyzing Workload Capture and Replay

Once you replay the workload on the test system, you must analyze the data replay by generating a workload replay report. To measure the data and performance differences between the capture and replay systems, as well as to list any errors during the workload replay, generate the workload replay report, as shown here:
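DECLARE
  cap_id  NUMBER;
  rep_id  NUMBER;
  rep_rpt CLOB;
BEGIN
  -- Load the capture/replay history for the replay directory and get the capture ID
  cap_id := dbms_workload_replay.get_replay_info (dir => 'REPLAY_DIR');
  -- Pick the most recent replay of this capture
  SELECT MAX(id) INTO rep_id FROM dba_workload_replays WHERE capture_id = cap_id;
  -- Generate the replay report in text format
  rep_rpt := dbms_workload_replay.report (replay_id => rep_id, format => 'TEXT');
END;
/

(A sketch: the directory object name and the choice of the latest replay are illustrative.)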

The get_replay_info function returns the history of the workload capture and all the workload replays made based on the specified directory. You can associate the capture_id value returned by this function with the capture_id column of the DBA_WORKLOAD_REPLAYS view.

The get_replay_info function also adds a row to the DBA_WORKLOAD_REPLAYS view for every replay you make from the specified replay directory. Note that we chose text as the value for the format parameter; you can also specify HTML and XML as the values for this parameter. Here is a sample report from the execution of the report function:

Error Data (% of total captured actions)
  New errors:                                          7.3%
  Not reproduced old errors:                           1.0%
  Mutated errors:                                      1.0%

Data Divergence
  Percentage of row count diffs:                       5.0%
  Average magnitude of difference (% of captured):     2.5%
  Percentage of diffs because of error (% of diffs):   25.5%
  Result checksums were generated for 10% of all actions (% of checksums)
  Percentage of failed checksums:                      0.0%
  Percentage of failed checksums on same row count:    0.0%

Replay Specific Performance Metrics
  Total time deficit (-)/speed up (+):                 12 min
  Total time of synchronization:                       24 min
  Average elapsed time difference of calls:            0.2 sec
  Total synchronization events:                        3284218772

The error, data, and performance divergence figures in this report will give you a good idea of the potential issues subsequent to major changes such as a database upgrade to a new release. Database Replay’s reporting tools give you the following types of information:

  • Data divergence, which is reflected in differences in the number of rows returned by queries.
  • Errors during the workload replay.
  • Performance comparison between the workload capture and workload replay. In this example, the total time deficit is shown to be 12 minutes after the change was implemented. Obviously, things are taking longer to process after the changes were made to the database, and you must, of course, investigate this further.
  • Performance statistics as captured by AWR reports.

Note that a performance divergence can be desirable as well as undesirable, since it’s entirely possible that the replay system uses a more recent version of the database, thus providing improved performance.

A data divergence is said to occur when the source and replay systems return different numbers of rows in response to identical SQL queries or DML statements. Of course, any such data divergence merits serious investigation.

Oracle claims that Database Replay offers significant benefits when compared to third-party tools such as LoadRunner for testing large applications. Database Replay offers the great advantage that it tests virtually 100 percent of the actual production workload, compared to a third-party tool’s artificially simulated workload, which at best uses less than 10 percent of the actual workload.

Database Replay is also much faster than a third-party tool, completing a replay of a complex application in days compared to months. In one study (Oracle Database 11g: Real Application Testing and Manageability Overview), Oracle claims that it takes only half a month to test an entire e-business suite with Database Replay, compared to a total time of seven-and-a-half months with LoadRunner.

Oracle provides several DBA_WORKLOAD_* views to help monitor and manage the Workload Replay feature. For example, the DBA_WORKLOAD_REPLAYS view shows you all the workloads that have been replayed in a database. You can monitor the workload capture and replay by using new views such as DBA_WORKLOAD_CAPTURES and DBA_WORKLOAD_FILTERS.

The SQL Performance Analyzer

One of the major problems you encounter during an Oracle database upgrade is the difficulty of predicting the impact of the upgrade on the functionality as well as the performance of the database. In Oracle Database 11g, you can perform a what-if analysis of potential database changes and their impact on SQL performance using the powerful SQL Performance Analyzer.

The SQL Performance Analyzer (SPA) performs fine-grained performance analysis of individual SQL statements and provides suggestions for solving the potential performance degradation of SQL statements consequent to a database upgrade, say, from the Oracle Database 10.2 release to the Oracle Database 11.1 release.

The SPA provides these recommendations by comparing performance before and after the upgrade. You thus have the opportunity to catch those SQL queries whose performance is likely to worsen following a database upgrade and tune them so you can avoid the performance degradation. Essentially, the SPA lets you compare and analyze two versions of workload performance in order to identify those SQL statements that are likely to be affected by the changes you are making.

Once the SPA identifies SQL execution plan modifications and performance regressions brought about by changes such as a shift to a different optimizer level, you can use the SQL Tuning Advisor to retain the original execution plans or tune the SQL statements that have regressed following a database change.

In addition to using the SQL Performance Analyzer for gauging the potential SQL performance changes after a database upgrade, you can also use it to analyze any changes in the database that can potentially impact SQL performance, such as the following:

  • Database and application upgrades.
  • Hardware changes.
  • Operating system changes.
  • Initialization parameter changes.
  • SQL tuning actions such as the creation of SQL profiles.
  • Schema changes.

You first need to decide whether you’re going to run the SQL Performance Analyzer analysis on the production system or on an identically configured test system. If you decide to go with a test system, you can use RMAN’s duplicate command to create your test version of the production database. Your test system, should you choose to go that route, must use the same database version and initialization parameters as the production database.

If you use a test system for the replay, you can capture the SQL workload on the production system, import it into the test system, and run the SQL Performance Analyzer there to compare the pre-upgrade and post-upgrade SQL performance. We recommend you use a test system to avoid the extra load on the production system.

In this example, we’ll show you how to use the SQL Performance Analyzer to help predict SQL performance changes following the upgrade of a database from the Oracle Database 10.2 release to the Oracle Database 11.1 release. We assume you’re using a test database to run the analysis instead of running it on the production system.

You must configure the test database as similarly as possible to the production system to get the most out of the SQL Performance Analyzer. Oracle recommends you use Oracle Enterprise Manager to run the SQL Performance Analyzer. However, we show you how to run the tool using the PL/SQL procedures in the new package DBMS_SQLPA, which provides the interface for the SQL Performance Analyzer.

When you use this package to run the SQL Performance Analyzer, you create a SQL Performance Analyzer task to contain the results of the SQL replay trials you perform. You also use several new procedures in the DBMS_SQLTUNE package to conduct your performance analysis with the SQL Performance Analyzer.

You can store the SQL workload that you capture in a SQL tuning set (STS). An STS is a database object that includes the SQL text of one or more SQL statements along with information pertaining to their execution environment, the execution plan, and the execution statistics. You can also specify that the STS include the execution plans and row source statistics for the SQL statements in the STS.

Using an STS lets you collect and store the SQL information in a persistent database object, which you can modify and select from later, just as you would data from a database table. An STS also makes it easy for you to export the SQL workload from the production system to the test system and provide the workload as input to the SQL Performance Analyzer.

You can use any of the following sources to load statements into an STS:

  • The automatic workload repository (AWR)
  • A cursor cache
  • Another STS

Once you capture the SQL statements that comprise the SQL workload on the production system in a SQL tuning set, you follow these steps to perform a SQL Performance Analyzer task:

  1. Measure the pre-change SQL workload performance: The SQL Performance Analyzer executes the SQL statements that are part of the STS that you create and generates the execution plan and execution statistics for those statements. The SPA stores the execution information in the STS.
  2. Make the system changes: After the first run of the SQL Performance Analyzer, you make the changes that you want to test on the testing system. For example, you may want to test the migration to a different release of the database by installing the new release and changing the initialization parameter optimizer_features_enable to the new version.
  3. Measure the post-change SQL workload performance: The SQL Performance Analyzer collects the SQL performance data again, after you make the database change you’re testing.
  4. Compare the SQL performance: The SQL Performance Analyzer will compare the SQL performance before and after the changes you made. The goal is to identify changes in the execution plans of the SQL statements that are part of the captured workload. Statistics such as the execution times, buffer gets, and disk reads are some of the metrics that serve as the basis for comparing SQL performance.

Capturing the Production System SQL Workload

Your first task in running the SQL Performance Analyzer is to capture a representative SQL workload from the production system so you can analyze its performance before and after the database upgrade.

The SQL workload consists of not just the SQL statements but additional environmental information such as the SQL bind variable values and the execution frequency of the SQL statements. As mentioned earlier, you can use the AWR, the cursor cache, or another STS to load the production database SQL workload into an STS.

The following sections cover the steps you must follow to capture the production system SQL workload into an STS.

Creating the SQL Tuning Set

The first thing you must do is create a new STS, which you can then populate on the production system to capture the workload statistics:

SQL> exec dbms_sqltune.create_sqlset (sqlset_name => 'upgrade_set', description => '11g upgrade workload');

The name of the new STS is upgrade_set, which we’ll use to store the captured SQL workload on the production system.

The parameter sqlset_owner in the create_sqlset procedure defaults to the name of the current schema owner. Note that the create_sqlset procedure creates just an empty STS, which you must load with the necessary SQL statements later.

Loading the SQL Tuning Set

Once you create your STS, you must load the SQL statements into it by using the load_sqlset procedure of the dbms_sqltune package, as shown here:
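DECLARE
  stmt_cur dbms_sqltune.sqlset_cursor;
BEGIN
  -- Select statements from the cursor cache; the schema filter is illustrative
  OPEN stmt_cur FOR
    SELECT VALUE(p)
    FROM TABLE (dbms_sqltune.select_cursor_cache ('parsing_schema_name = ''HR''')) p;
  -- Load the selected statements into the upgrade_set STS
  dbms_sqltune.load_sqlset (sqlset_name => 'upgrade_set', populate_cursor => stmt_cur);
END;
/

(A sketch: this loads statements from the cursor cache; you could instead load them from the AWR or another STS.)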


Now that you have captured a representative SQL workload from the production system, it’s time to export this STS to the test system where you’ll conduct the performance analysis.

Transporting the SQL Tuning Set

You must first create a staging table using the create_stgtab_sqlset procedure before you can export the STS you created in the previous step. This staging table is where you export the STS from the production database and from where you subsequently import it into the test database. Here are the steps in transporting the STS you captured from the production system:

SQL> exec dbms_sqltune.create_stgtab_sqlset (table_name => 'stagetab');

The previous code creates a new staging table named stagetab. Once you create your staging table, it’s time to export the STS into that table by using the pack_stgtab_sqlset procedure:
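SQL> exec dbms_sqltune.pack_stgtab_sqlset (sqlset_name => 'upgrade_set', staging_table_name => 'stagetab');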

After the export of the STS from the production system is complete, you’re ready to import the STS into the test system.

Importing the STS into the Test System

After you use the Data Pump Export utility to export the staging table stagetab (which contains the STS), use the Data Pump Import utility to load the staging table into your test system. After you import the staging table, you must run the unpack_stgtab_sqlset procedure in order to import the STS into the replay database:
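SQL> exec dbms_sqltune.unpack_stgtab_sqlset (sqlset_name => '%', replace => TRUE, staging_table_name => 'stagetab');

(A sketch: the wildcard imports every STS found in the staging table.)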

Once you successfully import the STS into the replay database, your next step is to create a SQL Performance Analyzer task, as explained in the next section.

Creating a SQL Performance Analyzer Task

To compare the execution of the captured production workload on the pre- and post-upgrade test environments, you must first create a SQL Performance Analyzer task, using the DBMS_SQLPA package. Use the create_analysis_task function to create an analysis task for the STS you created earlier:
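SQL> variable spa_task varchar2(64)
SQL> exec :spa_task := dbms_sqlpa.create_analysis_task (sqlset_name => 'upgrade_set', task_name => 'spa_task1');

(A sketch: the task name is illustrative; create_analysis_task returns the name of the task it creates.)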

Since you’re specifying an STS as the source of the SQL workload, make sure you have already created and loaded the STS before creating the SPA task as shown here.

Analyzing the Pre-change SQL Workload

To get the best results from using the SQL Performance Analyzer, your test system must resemble the production system as closely as possible. After installing the Oracle Database 11g release software, you need to set the optimizer_features_enable parameter to match the production system version on which you captured the SQL STS, as shown here:

optimizer_features_enable=10.2.0

By setting the value of the optimizer_features_enable parameter to 10.2.0, you can generate SQL performance statistics for the 10.2 database you’re about to upgrade to the 11g version. Now you are all set to capture the pre-upgrade SQL performance data. Use the execute_analysis_task procedure to analyze the SQL workload, as shown here.
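A sketch of the pre-change run (the task and execution names are illustrative):

SQL> exec dbms_sqlpa.execute_analysis_task (task_name => 'spa_task1', execution_type => 'test execute', execution_name => 'before_change');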

The key parameter here is the execution_type parameter, which can take the value explain plan, which generates explain plans for the SQL statements in the SQL workload, or test execute, which actually executes all the SQL statements in the workload. In our example, we specify test execute as the value for the execution_type parameter, because we want to execute all the SQL statements in our STS. Note that only queries and the query part of DML statements are executed, so as to prevent an adverse impact on the data.

Analyzing the Post-upgrade SQL Workload

To compare the effect of a system change, you must first make that change on the test system. In this example, we’re trying to figure out the impact of a database upgrade from the 10.2 release to the 11.1 release. To test the impact of the database upgrade, you must first set the value of the initialization parameter optimizer_features_enable to the 11g version:

optimizer_features_enable=11.1

Once you do this, you can collect the post-upgrade SQL performance data by running the same execute_analysis_task procedure as you did before you made the change:
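SQL> exec dbms_sqlpa.execute_analysis_task (task_name => 'spa_task1', execution_type => 'test execute', execution_name => 'after_change');

(A sketch: the execution name distinguishes this post-change run from the pre-change run.)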

The final step is to compare the SQL performance between the two runs of the execute_analysis_task procedure.

Comparing the SQL Performance

You must execute the execute_analysis_task procedure a third and final time in order to compare the SQL performance before and after the database upgrade. You must understand how to set the value of the execution_type parameter for this third and final execution of the execute_analysis_task procedure. Since two executions of execute_analysis_task, both of the execution type test execute, already exist, you must now pass the value compare performance to the execution_type parameter so the procedure can analyze and compare the SQL performance data from the two workload analyses you conducted.

Here’s how you invoke the execute_analysis_task procedure to compare SQL performance before and after the system change:
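A sketch of the comparison run (the comparison metric is passed through the execution_params argument list; the names match the earlier runs):

SQL> exec dbms_sqlpa.execute_analysis_task (task_name => 'spa_task1', execution_type => 'compare performance', execution_params => dbms_advisor.arglist('comparison_metric', 'disk_reads'));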

In this example, we used disk_reads as the value for the comparison_metric parameter, which lets you specify a variety of statistics to measure the performance impact of the system changes you make. Other possible values for this parameter include elapsed_time, optimizer_cost, direct_writes, parse_time, buffer_gets, and so on.

Generating the SQL Performance Analyzer Report

Once you finish running the comparison analysis, it’s time to generate a report showing the results. You can generate the results of the SQL performance comparison in report form by executing report_analysis_task, as shown here:
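SQL> variable spa_report clob
SQL> exec :spa_report := dbms_sqlpa.report_analysis_task (task_name => 'spa_task1', type => 'text', section => 'summary');
SQL> print spa_report

(A sketch: report_analysis_task is a function that returns the report as a CLOB, which we capture in a bind variable and print.)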

In this example, we chose to generate a text report (HTML and XML are the other options) and summary as the value for the section parameter (all is the other option).

Analyzing the Performance Report

The SQL Performance Analyzer report in this example uses the disk_reads comparison metric to evaluate the database change you made (a different optimizer level).

The report contains three main sections: general information, result summary, and result details. The result summary section shows overall performance statistics pertaining to the entire SQL workload as a whole, indicating whether performance will improve or degrade after the database changes you’re putting in place.

The report provides detailed execution statistics for each of the SQL statements in the STS you provide and summarizes the findings by saying whether the performance of a statement has regressed and whether its SQL execution plan has changed. Wherever the report indicates a performance regression, it will also contain a root cause analysis and recommendations to improve the execution plan of that SQL statement. You can then tune the regressed statements using the recommendations.

The SQL Performance Analyzer offers benefits such as a low overhead for data capture and integration with the cost-based optimizer when compared to third-party tools that perform similar tasks.

The SQL Performance Analyzer also has the advantage of being integrated with other Oracle tools such as the SQL Tuning Advisor and the SQL Plan Management feature. You can use both the SQL Tuning Advisor and the new SQL Plan Management feature to carry out the SQL Performance Analyzer recommendations made after the system change.

You can place the new execution plans generated as a result of the system change you implemented in the SQL plan baseline repository. Thus, you can ensure that the optimizer will pick only previously validated execution plans. Following this, the database can automatically validate the plans you seeded in the SQL plan baseline, or you can manually validate them yourself.

