Searching for an SAP MaxDB job? If you are an expert in SAP, this page is for you. If you are preparing for an SAP MaxDB job interview, the Wisdomjobs interview questions and answers below will help you clear it. SAP MaxDB is an ANSI-compliant relational database system developed and maintained by SAP SE and written in C++. Data is stored in regularly structured tables and described by metadata. Good knowledge of SAP MaxDB is required for this job. The following SAP MaxDB interview questions and answers will make you comfortable facing the interview:
One can configure a maximum of 255 data volumes in the SAP MaxDB standard layout (parameter: VOLUMENO_BIT_COUNT or ConverterVolumeIDLayout = 8). The maximum size of a single data volume is 128 GB. However, the maximum total size of the data volumes is 32 TB. The log area can also use a maximum of 32 TB.
No, there is no limit for the number of simultaneous sessions of an SAP MaxDB database. The database parameter MaxUserTasks is used to configure the number of database sessions that can be logged on simultaneously to the database.
The number of database users in OLTP systems should be configured to at least 2 x <number_SAP processes > + 4. (For liveCache in SCM 5.0 and above, the factor is 3; see the formulas below.)
The maximum number of connections to the database in the connection pool is determined for each J2EE instance (in NW 7.1, the default is 70). The sum of the connections (connection pools) of all J2EE instances + 4 gives the number of parallel user sessions (MaxDB parameter: MaxUserTasks).
The formula which is used to calculate the value for the database parameter MaxUserTasks for liveCaches in SCM system 4.1 and lower is:
2 x <number_SAP processes > + 4
The formula which is applied for liveCaches in SCM system 5.0 and above is:
3 x <number_SAP processes > + 4
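The two sizing formulas above can be sketched as a small Python helper. This is only an illustration; the function name and the version-threshold handling are made up for the example, only the multipliers and the "+ 4" come from the text:

```python
def max_user_tasks(sap_processes: int, scm_version: float) -> int:
    """Suggested MaxUserTasks for a liveCache, per the formulas above."""
    # SCM 4.1 and lower: factor 2; SCM 5.0 and above: factor 3
    factor = 2 if scm_version <= 4.1 else 3
    return factor * sap_processes + 4

print(max_user_tasks(20, 4.1))  # 2 * 20 + 4 = 44
print(max_user_tasks(20, 5.0))  # 3 * 20 + 4 = 64
```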
See also SAP Note 757914.
The MaxDB administrator is responsible for monitoring the active database and for the associated administrative activities.
The optimum use of the I/O system is critical for I/O performance. Thus, it becomes beneficial to distribute the volumes evenly across the available I/O channels.
The parallelism of the I/O gets affected by the number of data volumes.
The asynchronous I/O of the operating system is used on Windows.
On UNIX the number of configured I/O threads determines the parallelism with which the SAP MaxDB/liveCache database transfers the I/O requests to the operating system.
o SAP MaxDB versions lower than 7.7:
The number of I/O threads is the number of volumes * the number of I/O threads for each volume (_IOPROCS_PER_DEV).
o SAP MaxDB Version 7.7 or higher:
The number of I/O threads is the number of volumes * (total of low/medium/high queues for each volume), limited by the database parameter MaxIOPoolWorkers.
If the number of I/O threads is configured too high, the limits of the operating system resources may be reached.
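As a rough sketch, the two thread-count formulas could be written like this. The helper names are hypothetical; only the parameter names _IOPROCS_PER_DEV and MaxIOPoolWorkers come from the text:

```python
def io_threads_pre_77(volumes: int, ioprocs_per_dev: int) -> int:
    """SAP MaxDB < 7.7: I/O threads = volumes * _IOPROCS_PER_DEV."""
    return volumes * ioprocs_per_dev

def io_threads_77(volumes: int, queues_per_volume: int,
                  max_io_pool_workers: int) -> int:
    """SAP MaxDB >= 7.7: volumes * (total of low/medium/high queues
    per volume), capped by the database parameter MaxIOPoolWorkers."""
    return min(volumes * queues_per_volume, max_io_pool_workers)

print(io_threads_pre_77(10, 2))  # 10 volumes * 2 threads each = 20
print(io_threads_77(10, 3, 25))  # min(10 * 3, 25) = 25
```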
It is recommended to use the following formula to determine the number of SAP MaxDB data volumes: 'square root of the system size in GB, rounded up'.
10 GB: 4 data volumes
50 GB: 8 data volumes
100 GB: 10 data volumes
200 GB: 15 data volumes
500 GB: 23 data volumes
1 TB: 32 data volumes
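The sizing rule and the example values above can be reproduced with a few lines of Python (taking 1 TB as 1024 GB):

```python
import math

def data_volume_count(size_gb: int) -> int:
    """Number of data volumes = square root of the size in GB, rounded up."""
    return math.ceil(math.sqrt(size_gb))

for size_gb in (10, 50, 100, 200, 500, 1024):
    print(size_gb, "GB ->", data_volume_count(size_gb), "data volumes")
```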
It is preferred to have the same size for all the data volumes.
No, it is not recommended to change the existing configurations. In case of the occurrence of any serious I/O performance issues, one should analyze the problems in detail to determine their actual cause.
A maximum of 255 data volumes can be configured in the SAP MaxDB standard layout.
The volumes of the type "File" and of the type "Raw" can be defined on UNIX.
A raw device is a hard disk, or any part of it, that is not managed by the operating system. Data volumes of the type "raw device" can be configured for databases on UNIX.
Access to raw devices is generally faster because the administrative overhead required for file systems does not apply.
Because the operating system does not have to check the consistency of a file system on raw devices, the system can usually be restarted faster.
Because of the above mentioned advantages, it is recommended to use raw devices on UNIX platforms. However, volumes of the type "File" on UNIX are also supported.
Volumes of the type "File" are the recommended standard on Linux.
The performance of the database is greatly influenced by the speed with which the database system can read data from the data volumes and write data to the volumes. To ensure good performance while operating the database later, see SAP Note 993848 (Direct I/O mount options for liveCache/MaxDB) for information about creating and configuring volumes of the type "File".
It is recommended to distribute the data volumes across several LUNs. Experience suggests configuring approximately five data volumes for each LUN.
This mechanism can be activated using parameter EnableDataVolumeBalancing, as of MaxDB Version 7.7.06.09.
When the parameter EnableDataVolumeBalancing (deviating from the default) is set to value YES, all the data gets implicitly distributed evenly to all data volumes once a new data volume is either added or deleted.
An even distribution of data is triggered during the restart.
It is recommended to use the number of cores as a basis for calculating MAXCPU (as of Version 7.7, MaxCPUs), because dual-core CPUs have two cores with separate execution units (a separate L1 cache and sometimes even a separate L2 cache).
See also SAP Note 936058 (FAQ: MaxDB Runtime Environment) for information about setting the database parameter MAXCPU (MaxCPUs).
MaxDB Version 7.8 provides the option to use the parameter UseableCPUs to dynamically add additional CPUs to the database or to reduce the number of CPUs in use. The parameter MaxCPUs continues to control the maximum number of CPUs to be used.
The database parameter CACHE_SIZE or (as of Version 7.7.03) the database parameter CacheMemorySize should be used to configure the IO buffer cache.
The converter cache and the data cache of an SAP MaxDB database are included in the IO buffer cache.
The database performance is greatly influenced by the size of the IO buffer cache. The larger the dimensions of the IO buffer cache, the fewer time-consuming hard disk accesses have to be executed.
The data volume that is to be processed in day-to-day business and the application determine the size of the IO buffer cache that is to be set.
Ideally, all data processed in a running system should be held in the data cache so that no hard disk access is necessary. However, this is often not possible in BI and OLTP environments.
It should be possible to hold all data that is to be processed in the IO buffer cache, while using the SAP liveCache technology. Generally, the results of the Quick Sizer should be used to configure the IO buffer cache for SAP liveCache technology.
The following applies to the IO buffer cache: the larger, the better (provided that sufficient physical memory is available).
Note that the database also allocates heap memory in addition to the IO buffer cache. The overall memory consumption of an SAP MaxDB database can be determined using the information from the system table MEMORYALLOKATORSTATISTICS.
The data cache hit ratio is determined from the number of successful and failed accesses to the data cache. The hit ratio indicates whether the size of the data cache is well configured. However, if exceptional applications are running at the time of the analysis, the data cache hit ratio does not provide sufficient information. For example, during year-end closing the hit ratio may deteriorate because that data does not have to be held in the cache permanently. Directly after restarting the database, the hit ratio also does not indicate whether the system is well configured, because all data must first be loaded into the cache.
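As a minimal illustration, the hit ratio is simply the share of successful accesses in all cache accesses. The function below is a sketch for clarity, not a MaxDB API:

```python
def data_cache_hit_ratio(successful: int, failed: int) -> float:
    """Data cache hit ratio in percent: successful / (successful + failed)."""
    total = successful + failed
    return 100.0 * successful / total if total else 0.0

print(data_cache_hit_ratio(9800, 200))  # 98.0
```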
Tried-and-tested settings for the size of the IO buffer cache exist for the SAP systems of many OLTP and BW customers.
You can create an index (also known as secondary key) to speed up the search for database records in a table. An index is a database object that can be defined for a single column or a sequence of columns of a database table.
In technical terms, indexes are data structures (consisting of one or more inversion lists) that store parts of the data of a table in a separate B* tree structure. This storage sorts the data according to the inversion key fields used. Due to this type of storage, the table data can be accessed faster using the indexed columns than without the relevant index.
Indexes, unlike tables, do not contain any independent business data and can therefore always be recreated from the table. This is relevant, for example, if corruption occurs on the index due to hardware problems.
Indexes, like tables, are implemented as B* trees in SAP MaxDB. These consist of a root node that forms a tree with the subsequent nodes.
The complete index key and the references to the table data are in the lowest level of the index tree, also known as the leaf level. The database system does not use physical addresses for these references; instead, the primary keys of the data records are saved. The data record is identified by its primary key (the physical position of the data record is determined via the converter).
Since access to the data does not follow the sequence Primary key -> Data, but rather Data -> Primary key, it is also known as an inversion.
The idea behind this is that the relational table design provides for the fact that all data is dependent on a unique primary key.
While the access Primary key -> Data always returns one row or no rows, the access Data -> Primary key may return no rows, one row, or multiple rows.
Each database table has a primary key (primary index). The primary key is either defined by the user or generated by the system. A user-defined primary key can consist of one or more columns. The primary key must have a unique value for each row of the table.
The primary key is implemented directly on the data tree, which means there is no separate primary key tree. There is no ROWID or anything similar. The unique identification of a record is the primary key (for multiple keys, this is the combination of fields that are defined as the primary key).
If the SQL optimizer evaluates access using the index as the best solution, the primary keys of the table rows that match the index key fields used are determined in the index tree.
The corresponding rows are read from the table using this list of primary keys.
Indexes are additional data structures that must be maintained with each change to the table data. Therefore, the effort involved in a data change (INSERT, UPDATE, DELETE) in the database increases with the number of indexes on a table. You should therefore ensure that the indexes you create in the customer namespace are actually used by the application.
You will find a description of the strategies used by the SQL optimizer under 'Strategy' in the glossary of the SAP MaxDB documentation.
No. SAP MaxDB does not have an index fragmentation problem like that of Oracle (Note 771929). Indexes are kept permanently optimal, and freed storage space is immediately returned to the free space.
As of Version 7.8, individual indexes can be checked for consistency. For more information, see the documentation of the CHECK INDEX statement.
Yes. This is possible, for example, if the index is created on a sparsely used column (that is, a column containing little data) while the primary key requires a large amount of memory.
The key information is stored in a reduced form at index level in the B* tree of the table, while the secondary index must store each complete primary key.
No. View tables are views on tables. The tables that are involved in view tables are called base tables.
These views on base tables are implemented as SELECT statements on the base tables. Technically, view tables can be compared to saved SELECT statements.
Therefore, no indexes can be created on view tables, but they can be created on the base tables of view tables.
If a UNIQUE index is set to BAD, the corresponding table is set to READ ONLY. This lock is necessary since the UNIQUE option of the index can no longer be ensured for each additional write operation that concerns this index in the table. At the latest when the database is started the next time, UNIQUE indexes that are set to BAD are automatically recreated.
As of Version 7.8, the DBM command auto_recreate_bad_index is available. You can use this command to activate the automatic recreation of indexes (including UNIQUE indexes).
The data for the index is read in parallel by several server tasks to perform the index build as quickly as possible. Only one parallel index build can be carried out at a time - if several CREATE INDEX statements are executed at the same time, these other indexes are then processed by only one server task. This is noticeably slower than a parallel index build. Therefore, you should always ensure that indexes are only created successively.
If you cancel a CREATE INDEX statement, the cancel indicator is set for the user task. However, before the task can be re-released, it must clear all index lists that were created before by the server tasks for the index build.
This ensures that all of the structures that are no longer required are deleted from the system after you cancel a CREATE INDEX statement. This may take some time depending on the number and size of the index lists.
You can create several indexes simultaneously. However, since only one index build at a time can be carried out in parallel (by several server tasks), we recommend, to speed up index creation, that you start an index build on a large table only when no other CREATE INDEX statement is active. Indexes on small tables can be created even if a CREATE INDEX statement is already active.
Yes. Up to and including SAP MaxDB Version 7.6, a lock is set on the relevant tables during the index creation.
As of SAP MaxDB Version 7.7, the system sets a lock for the entire duration of the index creation only under certain conditions.
You can speed up the index build on large tables by ensuring that only one CREATE INDEX statement is active, and therefore several server tasks are carrying out the index build.
The data cache should be configured sufficiently large, so that preferably all data for the index build can be loaded in the cache.
No. You are not required to explicitly create the statistics for the indexes by carrying out an UPDATE STATISTICS.
Hints can be applied to single-table views. However, apart from a few exceptions (such as ORDERED), you cannot apply hints to join views.
Three types of locks exist in SAP MaxDB:
Record locks (ROW):
Individual lines of a table are locked.
Table locks (TAB):
The entire table is locked.
Catalog locks (SYS):
Database catalog entries are locked.
Locks can be requested implicitly by the database system or explicitly by you (using the relevant SQL statements).
a) Requesting locks implicitly:
All modifying SQL statements (for example, INSERT, UPDATE, DELETE) always request an exclusive lock.
You can select the lock operation mode by specifying an isolation level when you open the database session.
Depending on the specified isolation level, locks are then implicitly requested by the database system when the SQL statements are processed.
b) Requesting locks explicitly:
You can assign locks explicitly to a transaction using the LOCK statement, and you can lock individual table lines by specifying a LOCK option in an SQL statement. This procedure is possible at each isolation level. In addition, you can use the LOCK option to temporarily change the isolation level of an SQL statement.
Shared locks allow other transactions to perform read accesses but not to perform write accesses to the locked object. Other transactions can set further shared locks on this object, but no exclusive locks.
Shared locks can be set on individual table lines or on the entire table.
If an exclusive lock on a database object is assigned to a transaction, other transactions cannot access this object.
Transactions that want to check whether exclusive locks exist or to set exclusive or shared locks collide with the existing exclusive lock of the other transaction. They cannot access the locked object.
Exclusive locks can be set on individual table lines or on the entire table.
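The shared/exclusive rules described above boil down to a small compatibility matrix. The following Python sketch is purely illustrative (the lock names and the helper are not MaxDB identifiers):

```python
# Shared locks tolerate further shared locks; an exclusive lock is
# incompatible with any other lock on the same object.
COMPATIBLE = {
    ("shared", "shared"): True,
    ("shared", "exclusive"): False,
    ("exclusive", "shared"): False,
    ("exclusive", "exclusive"): False,
}

def can_grant(held: str, requested: str) -> bool:
    """Can `requested` be granted on an object already locked with `held`?"""
    return COMPATIBLE[(held, requested)]

print(can_grant("shared", "shared"))     # True
print(can_grant("shared", "exclusive"))  # False
```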
The isolation level defines whether and in which way locks are implicitly requested or released.
The selected isolation level affects the degree of parallelism of concurrent transactions and the data consistency:
The lower the value of the isolation level, the higher the degree of parallelism and the lower the degree of the guaranteed consistency.
If there are concurrent accesses to the same dataset, various inconsistencies (such as dirty read, non-repeatable read, and phantom) may occur at different isolation levels.
For SAP applications (ABAP stack), the isolation level is defined in the XUSER file. The ABAP applications run at isolation level 0.
In NetWeaver (Java stack), data sources are used for the application logon. These data sources can be configured in NetWeaver, and you can freely choose the isolation level. If the data source does not specify an explicit isolation level, the default isolation level 1 (read committed) is used.
Isolation level 0:
In isolation level 0 (uncommitted), lines are read without an implicit request of shared locks (dirty read). This does not ensure that when a line is read again within a transaction, it has the same state as when it was read the first time because it may have been changed by a concurrent transaction in the meantime. This does not ensure that read records are actually committed in the database.
When you insert, change, and delete lines, the data records are exclusively locked. The locks are retained until the transaction ends to ensure that no concurrent changes can be made.
The SAP system is operated at isolation level 0 because a separate lock mechanism that is at a higher level than the lock mechanism of the database is implemented for the SAP application.
Isolation level 1:
If an SQL statement is used for a data retrieval, it is ensured for each read line that (when a line is read) no other transaction holds an exclusive lock for this line. The shared lock is removed after the record was read. If lock collisions occur, a lock (REQ ROW SHARE) is requested and the transaction must wait until this lock is assigned to it before it can access the record.
Isolation level 15:
Isolation level 15 ensures that the resulting set is not changed while it is being processed. Reading backwards and positioning in the resulting set creates unique results. When you insert, change, or delete lines, exclusive locks for the relevant lines are implicitly assigned to the transaction; these exclusive locks are released only when the transaction ends.
Isolation level 2:
Isolation level 2 protects against the non-repeatable read: a record that is read several times within a transaction always returns the same values.
If isolation level 2 or 20 (repeatable) is specified, shared locks are implicitly requested, before processing starts, for all the tables that are addressed by an SQL statement for the data retrieval. When you insert, change, or delete lines, exclusive locks for the relevant lines are implicitly assigned to the transaction; these exclusive locks are released only when the transaction ends.
Isolation level 3:
If isolation level 3 or 30 (serializable) is specified, a table shared lock is implicitly assigned to the transaction for each table that is addressed by an SQL statement. These shared locks can be released only when the transaction ends. Isolation level 3 protects against three abnormal access types (dirty read, non-repeatable read, and phantom), but is not suitable for an operation with a high degree of parallelism. When you insert, change, or delete lines, exclusive locks for the relevant lines are implicitly assigned to the transaction; these exclusive locks are released only when the transaction ends.
New isolation level as of the usage of MVCC:
When you start using these new isolation levels, the isolation levels mentioned above are no longer available.
Isolation level 50:
Isolation level 50 corresponds to COMMITTED READ.
Isolation level 60:
Isolation level 60 corresponds to "Serializable" that is known from Oracle. This is a lesser requirement than ISO/ANSI "Serializable".
The old isolation levels that are lower than 3 are mapped to the new isolation level 50. The old isolation level 3 is mapped to the new isolation level 60.
Consistent read is not yet supported in OLTP and OLAP operation. The lock management of SAP MaxDB will be changed in future versions due to the MVCC implementation. Currently, we cannot specify the SAP MaxDB version in which this function can be used productively.
The database parameters of SAP MaxDB influence the SAP MaxDB SQL lock management.
a) MaxUserTasks (MAXUSERTASKS):
The database parameter MaxUserTasks defines the maximum number of parallel user sessions.
b) MaxSQLLocks (MAXLOCKS):
The database parameter MaxSQLLocks defines the maximum number of line locks and table locks that can be held or requested at the same time. If the MaxSQLLocks value is reached, statements that request locks are rejected (-1000: Too many lock requests).
c) RequestTimeout (REQUEST_TIMEOUT):
The database parameter RequestTimeout defines how long a transaction must wait for the assignment of a lock. If this wait time has expired before the lock is assigned, the transaction terminates with an error.
d) DeadlockDetectionLevel (DEADLOCK_DETECTION):
The database parameter DeadlockDetectionLevel defines the maximum depth of search for the deadlock detection with SQL locks.
If this database parameter is set to value 0, the deadlock detection is deactivated (in other words, deadlocks are removed only by RequestTimeout). A value that is higher than 0 defines the depth of search. Up to the specified depth of search, deadlocks are detected and immediately removed. The initial value for the deadlock detection is 4. A higher value results in considerable costs. Consider using a higher value only if an application triggers serious deadlock problems that cannot be solved in the application.
A transaction terminates because too many SQL locks were requested. The configured maximum value (MaxSQLLocks) has been reached. The system can assign no further SQL locks.
If these problems frequently occur in the system, check the value of the database parameter MaxSQLLocks.
MaxSQLLocks values that are too high result in long search processes in the lock lists. Therefore, where possible, shorten write transactions by committing frequently instead of increasing the parameter.
You can increase MaxSQLLocks if one of the following situations frequently occurs:
In isolation level 1, a lock is used temporarily for each record and is released before the next record is read.
In isolation level 3, a table shared lock is used.
The SQL locks on joins are handled in the same way as locks on tables. There is no difference.
SAP recommends that you use the MaxDB tools DBMGUI/Database Studio, DBMCLI and SAP CCMS for the administration tasks.
You can use the CCMS scheduling calendar or the MaxDB tools to automate administration tasks that need to run regularly (such as automatic log backups or Update Statistics), as well as tasks that only need to be run when required (such as extending the data area).
In the MaxDB environment, you can use the transaction DB50: Database Assistant to monitor the database.
You can use transaction LC10: liveCache Assistant to monitor a liveCache. In SAP Releases as of Release 7.0, you access these transactions using transaction DB59.
As of SAP Release 7.0 the DBA Cockpit (transaction dbacockpit) is available as a central access to the database or liveCache administration.
The central CCMS scheduling calendar comprises transactions DB13 and DB13C.
You can use these transactions to schedule the following activities:
To perform a database backup, use the MaxDB backup tools DBMGUI or DBMCLI. You can use the scheduling calendar (DB13 or DB13C) to schedule a backup for a particular point in time and execute it implicitly.
File system backups of MaxDB volumes can be executed only when the database is offline, and we recommend them only in exceptional circumstances (such as when you use snapshots or split mirrors).
No. A database backup using the MaxDB tools only backs up the data that is stored in the data volumes. This includes the actual user data and the "before images", so that a consistent database can be restored using only the database backup.
The data that is saved in the log volume is not backed up during a data backup.
It is only possible to execute a file system backup when the database is in the offline status. The length of time for which the database is unavailable for operation depends on the size of the dataset.
In contrast, you can back up the database while it is in the online status using the MaxDB backup tools, in parallel with production operation. You can use a backup generated online using database tools for a system copy.
A file system backup of the volumes (data and log) backs up all data that is saved in the volume.
In contrast, a backup using the MaxDB tools only backs up the data of a converter version, which may be required for a recovery.
When you create a file system backup, the system does not carry out a checksum-check on the blocks.
While the backup is being created using the MaxDB tools, the system executes consistency checks on the data.
Even if you back up the log volumes using a file system backup, you also need to back up the log area using the database tools, to ensure that the log area can be released for overwriting again.
A file system backup is not included in the MaxDB backup history.
Backups performed using the MaxDB tools are integrated into the backup history, which makes restoring the database in the event of a recovery a simple process.
Yes. MaxDB supports a number of third-party backup tools, such as NetWorker, NetBackup from Veritas, and so on.
Which specific tools are supported, and how they can be included in a MaxDB environment is described in the MaxDB documentation (Note 767598). Search the glossary for the keyword "external backup tool".
Scripts in the database manager also enable you to use all other backup tools that are able to process backups from pipes.
When there is no more space available in the database, the database hangs. You must make free space available in the form of a new data volume. New volumes can be added while the database is online.
More information about this topic is available in the MaxDB documentation (Note 767598) in the glossary, under the keyword "data volume" or "db_addvolume".
The general parameter MAXUSERTASKS defines the maximum number of user tasks that can be active at the same time, and therefore determines the maximum number of database sessions.
When this number is reached, no further users can log on to this database instance. You then need to increase the parameter. If you assign a very high value to MAXUSERTASKS, the database system requires a lot of address space, especially in the case of local communication using shared memory.
The change only becomes active when you restart the database.