How queries perform directly affects the system's users. In this section, we will discuss five query-related techniques:
We do not have enough space to discuss techniques that involve hardware and operating-system configuration. Refer to the documentation from SAP and hardware vendors to learn more about these strategies.
Query Read Mode
Each query has one of the following three read modes (see Screen):
In this read mode, when we run the query, the fact table data needed for all possible navigational steps are read into the main memory area of the OLAP processor. Therefore, subsequent navigational steps do not require any additional database reads.
This read mode takes a long time to present the initial query result.
In this read mode, when we run the query, the fact table data needed only for the current navigational step are read into the main memory area of the OLAP processor. As a consequence, later navigation steps may need additional database reads. The data for identical navigational states are buffered in the OLAP processor.
If the query involves a hierarchy,the data for the entire hierarchy are read into the main memory area of the OLAP processor.
This read mode is recommended for queries with many free characteristics.
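The buffering of identical navigational states described above can be pictured as a simple cache keyed by the navigation state. The following is an illustrative sketch only (the class and function names are hypothetical, not actual OLAP-processor code):

```python
# Illustrative sketch: buffer query data per navigational state, so that
# only a new state triggers a database read (hypothetical names).

class NavigationCache:
    def __init__(self, read_from_db):
        self.read_from_db = read_from_db  # function: state -> data
        self.buffer = {}                  # state -> cached data
        self.db_reads = 0

    def get(self, nav_state):
        # nav_state is a hashable description of the drill-down,
        # e.g. a tuple of (characteristic, value) pairs.
        if nav_state not in self.buffer:
            self.db_reads += 1            # only new states hit the database
            self.buffer[nav_state] = self.read_from_db(nav_state)
        return self.buffer[nav_state]

cache = NavigationCache(lambda state: f"data for {state}")
cache.get((("IO_SREG", "EAST"),))
cache.get((("IO_SREG", "EAST"),))   # identical state: served from the buffer
print(cache.db_reads)  # 1
```

Navigating back to a state already visited costs no database read; only genuinely new drill-downs do.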
In this read mode, when we run the query, if a hierarchy is involved, the data needed only for the current node, such as the EAST region in Screen, are read into the main memory area of the OLAP processor. As a consequence, additional database reads are needed when expanding a lower-level node.
This read mode is very helpful for a query involving large hierarchies with many nodes. Aggregates should be created at a middle level of the hierarchies, and the start level of the query should be at or below this aggregate level.
The procedure used to set the read mode for a query follows.
Step 1. Run transaction RSRT, select a query from the lookup, and then click.
Step 2. Select a read mode for this query, and then click to save the setting.
A read mode has been set for the query.
Note: To set a read mode for all queries, select All queries read mode from the Environment menu in Screen. To set a default read mode for future queries, run transaction RDMD.
Bitmap indices can dramatically improve query performance when table columns contain few distinct values. The ratio of the number of distinct values to the number of total rows in a table is called its cardinality. A column is a good candidate for a bitmap index when its cardinality is less than 0.01. Otherwise, you should consider using a B-tree index. For this reason, line item dimensions use B-tree indices, whereas other dimensions use bitmap indices.
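The cardinality rule above can be sketched in a few lines. This is an illustration of the decision criterion only; the function names and the sample data are made up:

```python
# Sketch of the cardinality rule: the ratio of distinct values to total
# rows decides between a bitmap and a B-tree index.

def cardinality(column_values):
    return len(set(column_values)) / len(column_values)

def recommended_index(column_values, threshold=0.01):
    # Below the threshold the column has few distinct values,
    # so a bitmap index is the better candidate.
    return "bitmap" if cardinality(column_values) < threshold else "B-tree"

# 10,000 rows but only 4 distinct regions -> cardinality 0.0004
regions = ["EAST", "MIDWEST", "WEST", ""] * 2500
print(cardinality(regions))        # 0.0004
print(recommended_index(regions))  # bitmap
```

A column such as a document number, where nearly every row holds a distinct value, would score close to 1.0 and point to a B-tree index instead.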
Table gives an example of a bitmap index, in this case for the IO_SREG column in Table. It consists of four distinct bitmaps: 1000 for EAST, 0100 for MIDWEST, 0010 for WEST, and 0001 for blank.
BITMAP INDEX FOR IO_SREG COLUMN IN TABLE
When we run a query to display the sales revenue in the EAST and MIDWEST regions, the database will select and summarize the sales revenue of all rows that contain the value 1 in the Result column of Table.
BITMAP BOOLEAN OPERATION
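The Boolean operation above can be sketched as follows. This is an illustrative model of the technique, not database internals; the four-row data set mirrors the example's regions:

```python
# Sketch of the bitmap Boolean operation: each region's bitmap has one
# bit per table row, and OR-ing the EAST and MIDWEST bitmaps marks every
# row belonging to either region.

rows = [
    {"IO_SREG": "EAST",    "REVENUE": 100},
    {"IO_SREG": "MIDWEST", "REVENUE": 200},
    {"IO_SREG": "WEST",    "REVENUE": 300},
    {"IO_SREG": "",        "REVENUE": 400},
]

# Build one bitmap per distinct value (1 where the row matches).
bitmaps = {}
for value in {r["IO_SREG"] for r in rows}:
    bitmaps[value] = [1 if r["IO_SREG"] == value else 0 for r in rows]

# OR the EAST and MIDWEST bitmaps to form the Result column.
result = [e | m for e, m in zip(bitmaps["EAST"], bitmaps["MIDWEST"])]
print(result)  # [1, 1, 0, 0]

# Summarize revenue for all rows flagged 1 in the result bitmap.
total = sum(r["REVENUE"] for r, bit in zip(rows, result) if bit)
print(total)  # 300
```

Because each bitmap is just a string of bits, the database can combine them with cheap bitwise operations before touching any row data.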
From this example,we make the following observations:
The procedure to check bitmap indices in BW follows.
Step 1. Run transaction SE11, display the /BIC/IC_NEWBC2 table definition, and then click.
Step 2. The indices for this table are listed. Select the first one, and then click.
Information about this index appears in Screen.
To check the index type, display the contents of the table DDSTORAGE. In Screen, we see that the above index is a bitmap index.
Note: The same information can be obtained from Screen under the Indexes folder. In BW, we cannot change the index type.
Statistics for the Cost-Based Optimizer:
The cost-based optimizer decides the most appropriate query execution plan based on available statistical information. For this reason,it is very important to have up-to-date statistical information.
In the section "Indices and Statistics," we discussed ways to automate the process of refreshing the statistical information for each new data load. In Screen in that section, at the bottom of the DB statistics block, we need to specify a percentage that indicates how much of the InfoCube data will be used to calculate the statistics. The default value is 10.
In addition to this method, BW provides a program called SAP_ANALYZE_ALL_INFOCUBES that gathers statistical information for all tables related to InfoCubes, including master data and aggregates. When running this program, we need to give a percentage, which specifies how much of the InfoCube data will be used to calculate the statistics.
Both methods use an ABAP statement like the following:
With this statement, both methods produce the same statistical information. If the input percentage is less than or equal to 20, BW will use 10 percent of the InfoCube data to estimate the statistics. Otherwise, BW will compute the exact statistics.
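The threshold rule can be expressed as a small sketch. The exact statement BW generates is not reproduced above, so the SQL text below is only a representative form of Oracle's ANALYZE syntax, assumed for illustration:

```python
# Hedged sketch of the 20-percent threshold rule, using representative
# Oracle ANALYZE syntax (the exact statement BW generates may differ).

def analyze_statement(table, input_percentage):
    if input_percentage <= 20:
        # BW samples a fixed 10 percent to estimate the statistics.
        return f"ANALYZE TABLE {table} ESTIMATE STATISTICS SAMPLE 10 PERCENT"
    # Above the threshold, exact statistics are computed.
    return f"ANALYZE TABLE {table} COMPUTE STATISTICS"

print(analyze_statement("FACT_TABLE", 10))  # sampled estimate
print(analyze_statement("FACT_TABLE", 50))  # exact computation
```

Note that any sample percentage up to 20 yields the same 10 percent sample, so entering 5 and entering 20 produce identical statistics.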
The biggest drawback of using ANALYZE in both methods is that the statistics will be calculated sequentially. In such a case,the Oracle PL/SQL package DBMS_STATS represents a better choice. Whenever possible,DBMS_STATS calls a parallel query to gather statistics; otherwise,it calls a sequential query or uses the ANALYZE statement. Index statistics are not gathered in parallel. Refer to the Oracle document "Designing and Tuning for Performance" to obtain more information on database statistics.
The following procedure checks the status of a table's statistical information.
Step 1. Run transaction DB20, enter the table name, and then click.
Screen displays the status.
A partition is a piece of physical storage for database tables and indices. If the needed data reside in one or a few partitions, then only those partitions will be selected and examined for a SQL statement, thereby significantly reducing I/O volume. This benefit, which substantially improves query performance, is called partition pruning. For a better result, it is recommended that you spread each partition over several disk devices.
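Partition pruning can be pictured with a small sketch. Assuming a fact table partitioned by calendar month (as set up later in this section), a query restricted to two months never reads the other partitions; the data and function names here are made up for illustration:

```python
# Sketch of partition pruning: only partitions matching the query's
# restriction are scanned, reducing I/O volume.

partitions = {
    "200501": [("EAST", 100), ("WEST", 150)],
    "200502": [("EAST", 120)],
    "200503": [("MIDWEST", 90)],
}

def pruned_scan(wanted_months):
    # Pruning step: discard partitions the query cannot need.
    scanned = [m for m in partitions if m in wanted_months]
    total = sum(rev for m in scanned for _, rev in partitions[m])
    return scanned, total

scanned, total = pruned_scan({"200501", "200502"})
print(scanned)  # only two of the three partitions are read
print(total)    # 370
```

The key point is that pruning happens before any row is read: the partitioning column in the WHERE clause lets the database skip whole partitions outright.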
Next,let's look at a partition example in BW. We already know how to check the table contents by running transaction SE11. SE11 also tells us how the table contents are stored in the database. The relevant procedure follows.
Step 1. Screen shows the first SE11 screen for the /BIC/FIC_NEWBC2 table.
Step 2. Select Database utility from the Utilities menu item.
Step 3. Click.
Screen indicates that the contents of /BIC/FIC_NEWBC2 are stored in three partitions. The column used to partition the table is KEY_IC_NEWBC2P. Because the first partition has a HIGH VALUE of 0, only the second and third partitions hold data, and each contains the data from one load request.
With E fact tables, we can partition each table by time. The procedure follows.
Step 1. After compressing InfoCube IC_NEWBC2,we open its definition.
Step 2. Select Partitioning from the Extras menu item.
Step 3. Partition the table by calendar month, and then click to continue.
Step 4. Enter a time range in which the data will be partitioned by month, and then click to execute the partition.
The E fact table has been partitioned by month.
Partitioning not only improves query performance, but also enhances load performance. For this reason, BW allows us to partition PSA tables via a special transaction, RSCUSTV6. After running this transaction, we obtain Screen, in which we specify the number of records after which a new partition will be created. By default, this value is 1,000,000 records.
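The PSA rule amounts to a ceiling division: a new partition is opened each time the record count crosses the RSCUSTV6 threshold. A minimal sketch, with an assumed function name:

```python
# Sketch of the PSA partitioning rule: at most `threshold` records per
# partition, a new one created once the threshold is crossed.

def psa_partitions(total_records, threshold=1_000_000):
    # Ceiling division: number of partitions needed for total_records.
    return max(1, -(-total_records // threshold))

print(psa_partitions(999_999))    # 1
print(psa_partitions(2_500_000))  # 3
```

Lowering the threshold gives smaller partitions and more of them, which can speed up both loading and later deletion of individual requests.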
Parallel Query Option (PQO):
Parallel processing can dramatically improve query performance when multiple processes work together simultaneously to execute a single SQL statement. The number of parallel processes assigned to the execution is called the degree of parallelism (DOP). The Oracle database determines the DOP in the following three steps:
Note: DOP cannot be specified within BW yet. You must use database tools to perform this task.
The Oracle initialization parameter PARALLEL_MIN_SERVERS specifies the number of parallel processes that an instance will have after start-up. A parallel query can obtain additional processes, either as explicitly specified or by default; in either case, the total cannot exceed the value of PARALLEL_MAX_SERVERS. Table lists other initialization parameters and their BW and R/3 default values.
PQO-RELATED INITIALIZATION PARAMETERS
PQO is a database feature. Increasing the CPU power in the application server machines in the SAP Basis 3-tier architecture will not improve database parallel processing,although it may improve OLAP performance.
Do not use PQO on a single-CPU machine.
A server that has already exhausted its CPU or I/O resources will not benefit from PQO.
To achieve optimal performance,all parallel processes should have equal workloads. Skewing occurs when some parallel processes perform significantly more work than the other processes.
PQO works best on partitions that are spread or striped across many disk devices.