Database Analysis SAP BASIS

The database is a central resource in any SAP system, and for that reason it deserves special attention when it comes to performance and stability, since all users may be affected if the database is not performing well.
SAP systems may use an Oracle, SQL Server, DB2 UDB, SAP DB, or Informix database. Depending on this factor, the monitors and the indicators to look at when analyzing the database differ. Apart from the individual indicators and characteristics of each database, analyzing a database follows the same general rules.

Using the monitors in your SAP system, you can check for hardware bottlenecks in CPU and memory on the database server, check for Input/Output contention on the hard disks, ensure that parameter settings are configured for optimal performance and stability, and search for expensive SQL statements. As an example, we will use the Database Monitor for an Oracle database in this chapter.

The Database Monitor (Oracle)

Execute transaction code ST04 or follow the path Tools | Administration | Monitor | Performance | Database | Activity. The figure below shows a view of the Database Monitor for Oracle. The most important database performance indicators are described briefly next.

Displaying important indicators in the Database Monitor (Oracle)

In an Oracle database it is important to pay attention to the following indicators:

  • The data buffer quality, which should be higher than 95%, indicating that when a call is passed to the database, most of the time the data are found in the buffer instead of having to be read from physical disk. (A way to cross-check these ratios outside ST04 is sketched after this list.)
  • The Shared Pool SQL and Data Dictionary Pin ratios, which should approach 100%, indicating that most of the time the database engine is able to reuse the SQL statements stored in the Shared Global Area.
  • The number of Reads/User Calls, which should be less than 20; otherwise, it may be an indicator of expensive SQL statements in the database. More information on detecting expensive SQL statements can be found later in this chapter.
  • The Time/User Call, which should be less than 20 milliseconds in order to achieve optimal performance.
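If you want to cross-check these ratios outside the ST04 screen, the following sketch queries the underlying Oracle performance views directly. It is only an approximation of how ST04 derives its figures (the exact SAP formulas may differ); the views and columns are standard Oracle, and the queries must run under a user with access to the V$ views.

  -- Sketch: approximate the ST04 quality ratios from the Oracle V$ views.
  -- Data buffer quality: share of logical reads that did not require a physical read.
  SELECT ROUND((1 - phy.value / (cur.value + con.value)) * 100, 2) AS buffer_quality_pct
    FROM v$sysstat phy, v$sysstat cur, v$sysstat con
   WHERE phy.name = 'physical reads'
     AND cur.name = 'db block gets'
     AND con.name = 'consistent gets';

  -- Shared Pool SQL (library cache) pin ratio: how often parsed statements are reused.
  SELECT ROUND(SUM(pinhits) / SUM(pins) * 100, 2) AS sql_pin_ratio_pct
    FROM v$librarycache;

  -- Data dictionary cache hit ratio.
  SELECT ROUND((1 - SUM(getmisses) / SUM(gets)) * 100, 2) AS dd_quality_pct
    FROM v$rowcache;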

Troubleshooting the Database: Expensive SQL Statements

When analyzing the response time of your system, you may find that the database request time in the Workload Analysis Monitor is excessive, or that certain programs in the Work Process Monitor are spending long times performing actions in the database ("Sequential Read," etc.). This might be the time to look at the Database Performance Monitor and check for possible performance problems. Sometimes, the analysis of a particular transaction or program in the SQL Trace may also lead you to a deeper analysis of SQL statements.

Note that many areas can negatively affect database performance, such as the network communication with the application servers, the hardware capacity of the database server, wrong parameter settings, and so on. Therefore, to keep this analysis simple, we will concentrate on checking the database for expensive SQL statements, give you some tips and tricks to detect possible performance problems related to them, and offer a few quick pieces of tuning advice.

Expensive SQL statements may come from a variety of situations, but most of the time the root cause is in the application layer, which in the case of SAP systems means the ABAP programming. It is important to detect expensive SQL statements before tables grow so large that they become difficult to manage, and to have your developers tune the custom code as much as possible in order to obtain optimal performance. From the Database Performance Monitor, click on Detail Analysis Menu and then on SQL Request. The selection screen that pops up allows you to filter and sort the output. The list displayed shows the selected SQL statements in the Oracle Shared Pool, as seen from the SAP Web Application Server.

Analyzing SQL statements in the Database Performance Monitor for an Oracle database

A simple rule of thumb to analyze whether any of these statements is expensive and therefore causing overhead in the database is the following: sort the list by Buffer Gets, which is basically the number of blocks read from the database buffer. When this number exceeds 5% of the total number of reads from the database (Reads in the main screen of the Database Performance Monitor), it is an indicator that the statement may be expensive, because it reads many blocks. Also look at the number of buffer gets per record, which shows how many blocks had to be read from the buffer for each record in order to satisfy the request.

When this figure is greater than 5, it may mean (depending, of course, on other factors such as parameter settings; let's assume they are set optimally) that the statement is reading "too much" and therefore that the access path chosen by the database engine is not suitable. When this figure is small, it is likely that the access path chosen by the engine is suitable and that the application (the ABAP program that passed the statement to the database) simply requested a lot of data.
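The same ranking can be reproduced at the SQL level against the Shared Pool. The sketch below, assuming access to the V$SQL view, lists the statements with the most buffer gets and computes the gets-per-record figure discussed above; the column names are standard Oracle, but the thresholds (5% of total reads, 5 gets per record) remain rules of thumb.

  -- Sketch: the most expensive statements in the Shared Pool by buffer gets,
  -- roughly what ST04 -> Detail Analysis Menu -> SQL Request shows.
  SELECT *
    FROM (SELECT buffer_gets,
                 disk_reads,
                 executions,
                 rows_processed,
                 ROUND(buffer_gets / GREATEST(rows_processed, 1), 1) AS gets_per_row,
                 sql_text
            FROM v$sql
           ORDER BY buffer_gets DESC)
   WHERE ROWNUM <= 20;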

To see the access path chosen by the database engine, highlight the selected SQL statement and choose Goto | Explain SQL or click on the wrench-like icon. The Cost Based Optimizer is the mechanism the Oracle engine uses to determine the best access path to the data (using a secondary index, reading the full table, etc.). The cost and the estimated number of rows read are displayed as well and should be as low as possible. When an index is used to retrieve the data, it is suitable if all or most of the fields in the WHERE clause of the statement are also included in the index.
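Outside the monitor, the same access path can be inspected with Oracle's own EXPLAIN PLAN. The sketch below uses a hypothetical statement against the SAP table VBAK purely as an illustration; DBMS_XPLAN is available from Oracle 9i onwards, and on older releases you would query the PLAN_TABLE directly.

  -- Sketch: check the access path of a statement with EXPLAIN PLAN.
  -- The statement below is a hypothetical example against the sales document header table.
  EXPLAIN PLAN FOR
    SELECT * FROM "VBAK" WHERE "MANDT" = '100' AND "VBELN" = '0000012345';

  -- Display the chosen plan, including cost and estimated rows.
  SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);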

You can check important statistical data for a table and see which indexes are created on it by clicking on the table name in the Execution Plan showing the access path. The statistics date is important, because all tables in a SAP system, with the exception of some tables with special processing (please check SAP Note 122718 and others), should have recent statistics. The Cost Based Optimizer needs current statistics in order to determine the most suitable access path.
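A quick way to verify when a table was last analyzed, assuming access to the DBA views, is sketched below; the schema owner and the table name are placeholders that depend on your installation.

  -- Sketch: check whether the optimizer statistics for a table are recent.
  SELECT table_name, num_rows, last_analyzed
    FROM dba_tables
   WHERE owner = 'SAPR3'          -- adjust to your SAP schema owner
     AND table_name = 'VBAK';     -- hypothetical example table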

The indexes on a table are also important for determining a good access path. The primary index has the name of the table followed by a tilde (~) and the number 0. Secondary indexes are followed by a different number or a letter. Check whether the fields of the indexes created on the table are selective by looking at how many "distinct" values each field has, and whether those fields are included in the WHERE clause of the statement passed to the database.
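The index layout and the selectivity of its fields can also be checked directly in the Oracle dictionary views, as sketched below (again with a hypothetical table, and the schema owner adjusted to your installation).

  -- Sketch: list the indexes on a table and the order of their fields.
  SELECT index_name, column_position, column_name
    FROM dba_ind_columns
   WHERE table_owner = 'SAPR3'     -- adjust to your SAP schema owner
     AND table_name  = 'VBAK'      -- hypothetical example table
   ORDER BY index_name, column_position;

  -- Sketch: how selective each field is, based on the distinct values from the statistics.
  SELECT column_name, num_distinct
    FROM dba_tab_columns
   WHERE owner = 'SAPR3'
     AND table_name = 'VBAK'
   ORDER BY num_distinct DESC;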

Another tip for optimal performance when using secondary indexes is to avoid having more than five indexes on a table or more than five fields in an index, because too many indexes may also be "confusing" for the database engine, and all of them need recent statistics and must be updated whenever the table changes, causing overhead. You can display the call point of the program that passed the statement to the database by clicking on the icon with the glasses or via Goto | Display Call Point in ABAP Program. This saves a lot of time, because you can find out exactly where and in which program the statement originates. When you determine that an ABAP program is causing a poor selection and access path in the database, and it is custom code, it may be convenient, if possible, to change the code accordingly to obtain better performance in accessing the data you need.

Another criterion that defines an expensive SQL statement is the number of Disk Reads. When this figure for a SQL statement exceeds 2% of the number of Reads, the statement may also be expensive, due to excessive reads from the data files and therefore excessive Input/Output. Finally, it is also important to distinguish between the statements passed to the database by the different applications running on the database server. Statements in upper case, with the accessed objects (tables and fields) between double quotes, are statements passed from ABAP programs. Statements in lower case without quotes may come from an external source, such as a SQL script, a third-party monitoring tool, and so on.
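The same Shared Pool list can be ranked by physical reads to apply the Disk Reads criterion, as in the sketch below (standard V$SQL columns; the 2% threshold remains a rule of thumb).

  -- Sketch: statements causing the most physical I/O.
  SELECT *
    FROM (SELECT disk_reads, buffer_gets, executions, sql_text
            FROM v$sql
           ORDER BY disk_reads DESC)
   WHERE ROWNUM <= 20;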

Additional Troubleshooting in the Database

Let us provide you with a few more tricks to analyze the performance and stability of your database. It is best practice to verify that you are following SAP's recommendations for the parameter settings of your database. In the case of an Oracle database, SAP Note 124361 contains the parameter settings recommended for optimal performance, to be set in the database profile initSID.ora, where SID is your system ID. Ensure that there are no missing indexes in your database that are used, or could be used, in accessing tables; execute transaction DB02 and click on Missing Indexes to verify. Make sure that locks are not the root cause of high database response times. To monitor lock wait situations at the database level from a SAP system, check transaction DB01 or follow the menu path Tools | Administration | Monitor | Performance | Database | Exclusive Lock Waits.
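What DB01 shows can also be approximated directly against the Oracle lock views. The sketch below pairs sessions that hold a blocking lock with the sessions waiting for it; it is a simplified query on V$LOCK, and the exact logic DB01 uses may differ.

  -- Sketch: blocking and waiting sessions, roughly what DB01 (Exclusive Lock Waits) shows.
  SELECT holder.sid   AS holding_session,
         waiter.sid   AS waiting_session,
         waiter.ctime AS seconds_in_this_state
    FROM v$lock holder, v$lock waiter
   WHERE holder.block = 1          -- this session blocks others
     AND waiter.request > 0        -- this session is waiting for a lock
     AND holder.id1 = waiter.id1
     AND holder.id2 = waiter.id2;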

The list displayed shows the process that is holding a lock on an object and the processes that are waiting for that lock to be released. Locks are necessary for data consistency; however, programming efficiently and avoiding locks inside loops is a best practice to prevent exclusive lock wait situations. Also check the data files' distribution for contention. From the Database Performance Monitor ST04, go to Detail Analysis Menu and then File System Requests to check the time spent accessing blocks in each of your data files, which, depending on several factors such as whether you are using a cached storage subsystem, should optimally not exceed 15 to 45 milliseconds. Check the recommendations in the Installation Guides for advice on distributing the data files for performance and data security in order to avoid contention.
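To see roughly the same figures as File System Requests at the SQL level, the sketch below joins V$FILESTAT with V$DATAFILE and converts the read times (recorded by Oracle in centiseconds) to milliseconds per read.

  -- Sketch: average read time per data file, similar to ST04 -> File System Requests.
  SELECT df.name,
         fs.phyrds,
         ROUND(fs.readtim * 10 / GREATEST(fs.phyrds, 1), 1) AS avg_read_ms
    FROM v$filestat fs, v$datafile df
   WHERE fs.file# = df.file#
   ORDER BY avg_read_ms DESC;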

Also, check the order of the redolog files in an Oracle database by observing the V$ tables. These tables contain a great deal of historical and statistical data for analyzing the database in depth. From the Database Performance Monitor, choose Detail Analysis Menu and then Display V$ Values. Double-click on V$LOG and V$LOGFILE and check the order of the redologs and the sequence in which they are used to write log changes. If the sequence in which they are written is such that the same directory and physical disk is hit twice or more in a row, contention may be negatively affecting database performance. Finally, do not forget to check the database alert log file under Database Message Log for possible errors that affect the performance and stability of the database. For example, you may observe messages like "Checkpoint not complete," which are caused by excessive load on the database combined with redolog files that are too small or too few in number.
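The redolog layout can be read straight from the V$ views mentioned above; the sketch below shows each log group, its sequence and status, and the physical file of each member, so you can spot consecutive groups sitting on the same directory or physical disk.

  -- Sketch: redo log groups and their members, from V$LOG and V$LOGFILE.
  SELECT l.group#, l.sequence#, l.status, f.member
    FROM v$log l, v$logfile f
   WHERE l.group# = f.group#
   ORDER BY l.group#, f.member;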

There are other logs and traces available in the system that are the output of the program BRCONNECT, which substitutes for SAPDBA as the database operation and monitoring tool at the operating system level. It is best practice to schedule the BRCONNECT check function on a very regular basis (daily) to perform a complete check at the database level, and to review its output for any situation negatively affecting your database.


