Database testing involves observing whether operations performed on the front end are correctly reflected on the back end.
The approach is as follows: while adding a record through the front end, check the back end to confirm the addition actually took effect, and do the same for delete, update, and so on. For example, enter an employee record through the front end, then manually check the back end to verify the record was added.
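The check described above can be sketched in a few lines. This is a minimal illustration, not a real application test: the `employees` table and the `add_employee` helper are hypothetical stand-ins for the front-end operation under test.

```python
import sqlite3

# Hypothetical back end and front-end helper, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (emp_id INTEGER PRIMARY KEY, name TEXT)")

def add_employee(emp_id, name):
    """Stand-in for the front-end 'add record' operation."""
    conn.execute("INSERT INTO employees (emp_id, name) VALUES (?, ?)", (emp_id, name))
    conn.commit()

# Act through the "front end" ...
add_employee(101, "Asha")

# ... then verify directly against the back end.
row = conn.execute("SELECT name FROM employees WHERE emp_id = 101").fetchone()
assert row is not None and row[0] == "Asha", "record was not persisted"
```

The same pattern applies to update and delete: perform the operation through the interface, then query the tables directly to confirm the effect.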
A data-driven test runs the same test steps against multiple rows of data in a data table, which makes it easy to substitute different parameter values from an external source at run time.
e.g. using .xls sheets.
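A rough sketch of the idea: the "data table" here is an in-line list of (input, expected) rows, though in practice it would typically be read from an external sheet or file. The `products` table and the rows are invented for illustration.

```python
import sqlite3

# Illustrative database under test.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (sku TEXT PRIMARY KEY, price REAL)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [("A1", 10.0), ("B2", 25.5), ("C3", 7.25)])

# The "data table": one row per test iteration (input, expected output).
test_data = [("A1", 10.0), ("B2", 25.5), ("C3", 7.25)]

results = []
for sku, expected_price in test_data:
    # Same test logic, different parameters each iteration.
    (price,) = conn.execute(
        "SELECT price FROM products WHERE sku = ?", (sku,)).fetchone()
    results.append(price == expected_price)

assert all(results)
```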
Trigger execution can be verified by querying the common audit log, where we can see which triggers have fired.
Before testing database procedures and triggers, the tester should know the expected input and output of each procedure/trigger. Then execute the procedures and triggers: if the actual result matches the expected answer, the test case passes; otherwise it fails.
These input/output requirements should be obtained from the developer.
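The audit-log approach above can be demonstrated end to end. This is a hedged sketch using SQLite: the `orders` table, the `audit_log` table, and the trigger name are all illustrative, not from any real system.

```python
import sqlite3

# Illustrative schema: an AFTER INSERT trigger writes to an audit_log table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (order_id INTEGER PRIMARY KEY, amount REAL);
CREATE TABLE audit_log (event TEXT, order_id INTEGER);
CREATE TRIGGER trg_order_insert AFTER INSERT ON orders
BEGIN
    INSERT INTO audit_log (event, order_id) VALUES ('INSERT', NEW.order_id);
END;
""")

# Fire the trigger by performing the triggering operation.
conn.execute("INSERT INTO orders (order_id, amount) VALUES (1, 99.0)")

# Verify by querying the audit log: one 'INSERT' row means the trigger fired.
audit_rows = conn.execute("SELECT event, order_id FROM audit_log").fetchall()
assert audit_rows == [("INSERT", 1)]
```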
No, I do not think so, since the requirement seems ambiguous. The SRS should clearly state the performance or transaction requirement, e.g. "a DB retrieval rate of 5 microseconds".
Question 6. How to test a DTS package created for data insert, update and delete? What should be considered in the above case while testing it? What conditions are to be checked if the data is inserted, updated or deleted using text files?
Data integrity checks should be performed. If the database schema is in third normal form, that should be maintained.
Check whether any of the constraints have thrown an error. The most important command to watch is DELETE; that is where things can go really wrong.
Most of all, maintain a backup of the previous database.
It depends on your application interface and on what level of testing you are doing. When you save something from the front end, it obviously has to be stored somewhere in the database, so you will need to find out the relevant tables involved in saving the records and the data mapping from the front end to those tables. Then enter the data from the front end, save it, and verify it in those tables.
First, the tester should go through the requirement to understand why the particular stored procedure was written.
Then check whether all the required indexes, joins, updates and deletions are correct, comparing them with the tables referenced in the stored procedure, and ensure the stored procedure follows the standard format (comments, "updated by", etc.).
Then check the procedure name, calling parameters, and expected responses for different sets of input parameters.
Then run the procedure yourself with a database client such as TOAD, the mysql client, or Query Analyzer; rerun the procedure with different parameters and check the results against expected values.
Finally, automate the tests with a tool such as WinRunner.
Using the output database checkpoint and the database checkpoint, select the SQL manual queries option, enter SELECT queries to retrieve data from the database, and compare the expected and actual results.
Database testing is all about testing joins, views, imports and exports, testing the procedures, checking locks, indexing, etc. It is not about testing the data in the database.
Usually database testing is performed by a DBA.
The most important statement for database testing is the SELECT statement, which returns data rows from one or more tables that satisfy a given set of criteria.
You may need to use other DML (Data Manipulation Language) statements like INSERT, UPDATE and DELETE to manage your test data.
You may also need to use DDL (Data Definition Language) statements like CREATE TABLE, ALTER TABLE, and DROP TABLE to manage your test tables.
You may also need some other commands to view table structures, column definitions, indexes, constraints, and stored procedures.
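The DDL and DML statements listed above can be combined into a small test-data management routine. A minimal sketch in Python with SQLite; the `test_customers` table and values are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# DDL: create a throwaway test table.
conn.execute("CREATE TABLE test_customers (id INTEGER PRIMARY KEY, city TEXT)")

# DML: seed, modify, and read back test data.
conn.execute("INSERT INTO test_customers VALUES (1, 'Pune')")
conn.execute("UPDATE test_customers SET city = 'Mumbai' WHERE id = 1")
city = conn.execute(
    "SELECT city FROM test_customers WHERE id = 1").fetchone()[0]

# DML cleanup, then DDL cleanup.
conn.execute("DELETE FROM test_customers WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM test_customers").fetchone()[0]
conn.execute("DROP TABLE test_customers")

assert city == "Mumbai" and remaining == 0
```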
You have to do the following things while you are involved in data load testing.
Database testing basically includes the following.
In DB testing we need to check for:
Query Analyzer can be used to check data loading.
Test cases for database testing should include the following information: project name, module name, bug ID number, test objective, steps/actions, expected results, actual results, status, priority of defect, and severity of defect.
The following items are typically checked during database testing:
Non-editable fields can be tested manually through the database. For example, a field that is non-editable through the front end should not allow a user to add a record to the database directly.
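One way to make such a check repeatable is to assert that the database itself rejects direct writes to the protected field. A sketch under the assumption that the field is guarded by a DB-level trigger; the `accounts` table and trigger name are invented for illustration.

```python
import sqlite3

# Illustrative schema: a trigger that blocks direct edits to 'status'.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts (acct_no TEXT PRIMARY KEY, status TEXT NOT NULL);
CREATE TRIGGER trg_no_status_edit BEFORE UPDATE OF status ON accounts
BEGIN
    SELECT RAISE(ABORT, 'status is not editable');
END;
""")
conn.execute("INSERT INTO accounts VALUES ('A-1', 'OPEN')")

# The test: a direct edit attempt must be rejected by the database.
rejected = False
try:
    conn.execute("UPDATE accounts SET status = 'CLOSED' WHERE acct_no = 'A-1'")
except sqlite3.DatabaseError:
    rejected = True

assert rejected, "database allowed an edit to a non-editable field"
```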
Inner join, outer join, left outer join, and right outer join; try to explain each of them if the interviewer gives you time.
I can do that by using the DISTINCT keyword in my SQL query, e.g.: SELECT DISTINCT * FROM products WHERE product_category = 'Electronics';
Yes, SQL queries have a big impact on the overall performance of an application. A poorly written SQL query can take a long time to generate a report or retrieve data from the database, so we need to take a few precautions while writing queries. As a database tester I will also review the queries written by the developers. For example: get rid of nested SQL queries as much as possible and make use of joins instead.
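The "replace a nested query with a join" advice can be checked mechanically: both forms should return the same rows, which is itself a useful regression test when rewriting queries. A sketch with an invented customers/orders schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (cust_id INTEGER PRIMARY KEY, region TEXT);
CREATE TABLE orders (order_id INTEGER PRIMARY KEY, cust_id INTEGER, amount REAL);
INSERT INTO customers VALUES (1, 'East'), (2, 'West');
INSERT INTO orders VALUES (10, 1, 50.0), (11, 2, 75.0), (12, 1, 20.0);
""")

# Nested-subquery form.
nested = conn.execute("""
    SELECT order_id FROM orders
    WHERE cust_id IN (SELECT cust_id FROM customers WHERE region = 'East')
    ORDER BY order_id""").fetchall()

# Equivalent join form, usually friendlier to the optimizer.
joined = conn.execute("""
    SELECT o.order_id FROM orders o
    JOIN customers c ON c.cust_id = o.cust_id
    WHERE c.region = 'East'
    ORDER BY o.order_id""").fetchall()

assert nested == joined == [(10,), (12,)]
```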
Drill through is the process of going to the detail level data from summary data.
Consider the above example on retail shops. If the CEO finds out that sales in East Europe have declined this year compared to last year, he might then want to know the root cause of the decrease. For this, he may start drilling through his report to a more detailed level and eventually find out that even though individual shop sales have actually increased, the overall sales figure has decreased because a certain shop in Turkey has stopped operating. The detail-level data, which the CEO was not much interested in earlier, has this time helped him pinpoint the root cause of the declining sales. The method he followed to obtain the details from the aggregated data is called drill through.
Slicing means showing a slice of the data, given a certain dimension (e.g. Product), member value (e.g. Brown Bread) and measure (e.g. sales).
Dicing means viewing that slice with respect to different dimensions and at different levels of aggregation.
Slicing and dicing operations are part of pivoting.
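In SQL terms, a slice roughly corresponds to filtering on one dimension member, and a dice to grouping the same measure by several dimensions. A sketch with an invented product/region sales table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (product TEXT, region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("Brown Bread", "East", 10.0),
    ("Brown Bread", "West", 15.0),
    ("White Bread", "East", 5.0),
])

# Slice: total sales for one product member, across all regions.
(slice_total,) = conn.execute(
    "SELECT SUM(amount) FROM sales WHERE product = 'Brown Bread'").fetchone()

# Dice: the same measure cut by product AND region.
dice = conn.execute(
    "SELECT product, region, SUM(amount) FROM sales GROUP BY product, region"
).fetchall()

assert slice_total == 25.0 and len(dice) == 3
```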
A data warehouse usually captures data with the same degree of detail as available in the source. This "degree of detail" is termed granularity. But not all reporting requirements from that data warehouse need the same degree of detail.
To understand this, let's consider an example from the retail business. A certain retail chain has 500 shops across Europe. All the shops record detail-level transactions for the products they sell, and those data are captured in a data warehouse.
Each shop manager can access the data warehouse and they can see which products are sold by whom and in what quantity on any given date. Thus the data warehouse helps the shop managers with the detail level data that can be used for inventory management, trend prediction etc.
Now think about the CEO of that retail chain. He does not really care which particular salesgirl in London sold the highest number of chopsticks or which shop is the best seller of 'brown bread'. All he is interested in is, perhaps, the percentage increase of his revenue margin across Europe, or maybe year-on-year sales growth in Eastern Europe. Such data is aggregated in nature, because sales of goods in East Europe are derived by summing up the individual sales data from each shop in East Europe.
Therefore, to support different levels of data warehouse users, data aggregation is needed.
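The two views described above (the shop manager's detail rows and the CEO's regional totals) come from the same detail-level data, aggregated at different granularities. A sketch with invented figures:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (shop TEXT, region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("London-1",   "West Europe", 120.0),
    ("Istanbul-1", "East Europe",  80.0),
    ("Warsaw-1",   "East Europe",  60.0),
])

# Detail granularity: the shop manager's view, one row per shop.
detail = conn.execute("SELECT shop, amount FROM sales ORDER BY shop").fetchall()

# Aggregated granularity: the CEO's view, summed per region.
summary = dict(conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region").fetchall())

assert summary == {"East Europe": 140.0, "West Europe": 120.0}
```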
A fact table stores some kind of measurements. Usually these measurements are stored (or captured) against a specific time, and they vary with respect to time. Now, it might happen that the business is not able to capture all of its measures for every point in time. Those unavailable measurements can either be kept empty (NULL) or filled with the last available measurement. The first case is an example of an incident fact, and the second is an example of a snapshot fact.
A fact table that does not contain any measure is called a fact-less fact. This table will only contain keys from different dimension tables. This is often used to resolve a many-to-many cardinality issue.
Explanatory Note: Consider a school, where a single student may be taught by many teachers and a single teacher may have many students. To model this situation in a dimensional model, one might introduce a fact-less fact table joining the teacher and student keys. Such a fact table will then be able to answer queries like which students a given teacher has taught, or how many teachers have taught a given student.
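The school example can be sketched directly: the fact table holds only the two dimension keys, no measures, and counting rows answers the many-to-many questions. Table and key values are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Fact-less fact: only dimension keys, no measure columns.
CREATE TABLE fact_teaching (teacher_key INTEGER, student_key INTEGER);
INSERT INTO fact_teaching VALUES (1, 100), (1, 101), (2, 100);
""")

# "How many students does teacher 1 teach?"
(n_students,) = conn.execute(
    "SELECT COUNT(DISTINCT student_key) FROM fact_teaching WHERE teacher_key = 1"
).fetchone()

# "How many teachers teach student 100?"
(n_teachers,) = conn.execute(
    "SELECT COUNT(DISTINCT teacher_key) FROM fact_teaching WHERE student_key = 100"
).fetchone()

assert n_students == 2 and n_teachers == 2
```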
Mini dimensions can be used to handle rapidly changing dimension scenarios. If a dimension has a huge number of rapidly changing attributes, it is better to separate those attributes into a different table called a mini dimension. This is done because, if the main dimension table is designed as SCD Type 2, the table will soon grow very large and create performance issues. It is better to segregate the rapidly changing members into a different table, thereby keeping the main dimension table small and performant.
SCD stands for slowly changing dimension, i.e. a dimension where the data changes slowly. SCDs can be of many types, e.g. Type 0, Type 1, Type 2, Type 3 and Type 6, although Types 1, 2 and 3 are the most common.
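The most commonly asked of these, Type 2, keeps history by versioning rows. A minimal sketch of the mechanics, assuming an invented customer dimension with validity-date columns; '9999-12-31' marks the current row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE dim_customer (
    cust_id INTEGER, city TEXT, valid_from TEXT, valid_to TEXT)""")
conn.execute(
    "INSERT INTO dim_customer VALUES (1, 'Pune', '2017-01-01', '9999-12-31')")

# SCD Type 2 change: the customer moves to Mumbai on 2018-06-01.
# Step 1: expire the current row.
conn.execute("""UPDATE dim_customer SET valid_to = '2018-05-31'
                WHERE cust_id = 1 AND valid_to = '9999-12-31'""")
# Step 2: insert a new version carrying the changed attribute.
conn.execute(
    "INSERT INTO dim_customer VALUES (1, 'Mumbai', '2018-06-01', '9999-12-31')")

# Both versions survive, so history is preserved.
history = conn.execute(
    "SELECT city, valid_from, valid_to FROM dim_customer ORDER BY valid_from"
).fetchall()
assert len(history) == 2 and history[1][0] == "Mumbai"
```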
Dimensions are often reused for multiple applications within the same database with different contextual meaning. For instance, a "Date" dimension can be used for "Date of Sale", as well as "Date of Delivery", or "Date of Hire". This is often referred to as a 'role-playing dimension'.
A junk dimension is a grouping of typically low-cardinality attributes (flags, indicators, etc.) so that those can be removed from other tables and "junked" into an abstract dimension table.
These junk dimension attributes might not be related to each other. The only purpose of this table is to store all the combinations of the dimensional attributes that you could not otherwise fit into the other dimension tables. Junk dimensions are often used to implement rapidly changing dimensions in a data warehouse.
A degenerate dimension is a dimension that is derived from the fact table and does not have its own dimension table.
A dimension key such as a transaction number, receipt number, or invoice number has no other associated attributes and hence cannot be designed as a dimension table.
A conformed dimension is a dimension that is shared across multiple subject areas. Consider a 'Customer' dimension: both the marketing and sales departments may use the same customer dimension table in their reports. Similarly, a 'Time' or 'Date' dimension will be shared by different subject areas. These dimensions are conformed dimensions.
Theoretically, two dimensions which are either identical or strict mathematical subsets of one another are said to be conformed.
In a data warehouse model, dimensions can be of the following types.
Based on how frequently the data inside a dimension changes, we can further classify dimensions as slowly changing or rapidly changing.