This course covers the basics of LoadRunner

Course introduction


Load Test Planning

Developing a comprehensive test plan is key to successful load testing. A clearly defined test plan ensures that the LoadRunner scenarios you develop will accomplish your load testing objectives.

About Load Test Planning

As in any type of system testing, a well-defined test plan is the first essential step to successful testing. Planning your load testing helps you to:

  • Build test scenarios that accurately emulate your working environment. Load testing means testing your application under typical working conditions, and checking for system performance, reliability, capacity, and so forth.
  • Understand which resources are required for testing.
    Application testing requires hardware, software, and human resources. Before you begin testing, you should know which resources are available and decide how to use them effectively.
  • Define success criteria in measurable terms.
    Focused testing goals and test criteria ensure successful testing. For example, it’s not enough to define vague objectives like “Check server response time under heavy load.”

A more focused success criterion would be: “Check that 50 customers can check their account balance simultaneously, and that the server response time does not exceed one minute.” Load test planning is a three-step process: analyzing the application, defining testing objectives, and planning the LoadRunner implementation.
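A measurable criterion like the one above can be checked mechanically against test results. The sketch below is illustrative only; the measured numbers are hypothetical sample data, not output from a real scenario:

```python
# Evaluate the focused success criterion: 50 simultaneous customers,
# server response time never exceeding one minute.
MAX_RESPONSE_SECONDS = 60
REQUIRED_USERS = 50

# Hypothetical measurements from a scenario run (seconds).
measured = {"concurrent_users": 50, "response_times": [12.4, 18.9, 55.0, 31.2]}

passed = (measured["concurrent_users"] >= REQUIRED_USERS
          and max(measured["response_times"]) <= MAX_RESPONSE_SECONDS)
print("PASS" if passed else "FAIL")
```

Because the criterion is stated in measurable terms, the pass/fail decision requires no judgment call.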

Analyzing the Application

The first step to load test planning is analyzing your application. You should become thoroughly familiar with the hardware and software components, the system configuration, and the typical usage model. This analysis ensures that the testing environment you create using LoadRunner will accurately reflect the environment and configuration of the application under test.

Identifying System Components

Draw a schematic diagram to illustrate the structure of the application. If possible, extract a schematic diagram from existing documentation. If the application under test is part of a larger network system, you should identify the component of the system to be tested. Make sure the diagram includes all system components, such as client machines, network, middleware, and servers. The following diagram illustrates an online banking system that is accessed by many Web users. The Web users each connect to the same database to transfer funds and check balances. The customers connect to the database server through the Web, using multiple browsers.

Describing the System Configuration

Enhance the schematic diagram with more specific details. Describe the configuration of each system component. You should be able to answer the following questions:

  • How many users are anticipated to connect to the system?
  • What is the application client’s machine configuration (hardware, memory, operating system, software, development tool, and so forth)?
  • What types of database and Web servers are used (hardware, database type, operating system, file server, and so forth)?
  • How does the server communicate with the application client?
  • What middleware and application server sit between the front-end client and the back-end server?
  • What other network components may affect response time (modems and so forth)?
  • What is the throughput of the communications devices? How many concurrent users can each device handle?

For example, the schematic diagram above specifies that multiple application clients access the system.

Analyzing the Usage Model

Define how the system is typically used, and decide which functions are important to test. Consider who uses the system, the number of each type of user, and each user’s common tasks. In addition, consider any background load that might affect the system response time.

For example, suppose 200 employees log on to the accounting system every morning, and the same office network has a constant background load of 50 users performing various word processing and printing tasks. You could create a LoadRunner scenario with 200 virtual users signing in to the accounting database, and check the server response time.

To check how background load affects the response time, you could run your scenario on a network where you also simulate the load of employees performing word processing and printing activities.

Task Distribution

In addition to defining the common user tasks, examine the distribution of these tasks. For example, suppose the bank uses a central database to serve clients across many states and time zones. The 250 application clients are located in two different time zones, all connecting to the same Web server: 150 in Chicago and 100 in Detroit. Each office begins its business day at 9:00 AM local time, but since the offices are in different time zones, no more than 150 users should ever be signing in at any given time. You can analyze task distribution to determine when there is peak database activity, and which activities typically occur during peak load time.
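The time-zone reasoning above can be made concrete by normalizing each office's sign-in time to one clock. This is a sketch of the analysis, not LoadRunner output; the offices and counts come from the example above:

```python
# Normalize sign-in times to Central Time: Detroit (Eastern) signs in at
# 9:00 AM ET = 8:00 AM CT; Chicago signs in at 9:00 AM CT.
sign_ins = {
    "Detroit": {"users": 100, "signin_hour_ct": 8},
    "Chicago": {"users": 150, "signin_hour_ct": 9},
}

# Count users signing in during each hour to find the peak sign-in load.
by_hour = {}
for office in sign_ins.values():
    hour = office["signin_hour_ct"]
    by_hour[hour] = by_hour.get(hour, 0) + office["users"]

peak = max(by_hour.values())
print(f"Peak simultaneous sign-ins: {peak}")  # 150 (the Chicago cohort)
```

Because the two cohorts are an hour apart, the peak sign-in load is 150 users, not the full 250.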

Defining Testing Objectives

Before you begin testing, you should define exactly what you want to accomplish. The following are common application testing objectives that LoadRunner helps you test, as described in Robert W. Buchanan, Jr.’s The Art of Testing Network Systems (John Wiley & Sons, Inc., 1996).

A more detailed description of each objective appears at the end of this chapter.

Stating Objectives in Measurable Terms

Once you decide on your general load testing objectives, you should identify more focused goals by stating your objectives in measurable terms. To provide a baseline for evaluation, determine exactly what constitutes acceptable and unacceptable test results.

For example:

General Objective - Product Evaluation: choose hardware for the Web server.

Focused Objective - Product Evaluation: run the same group of 300 virtual users on two different servers, HP and NEC. When all 300 users simultaneously browse the pages of your Web application, determine which hardware gives a better response time.

Deciding When to Test

Load testing is necessary throughout the product life cycle. The following table illustrates what types of tests are relevant for each phase of the product life cycle:

Planning LoadRunner Implementation

The next step is to decide how to use LoadRunner to achieve your testing goals.

Defining the Scope of Performance Measurements

You can use LoadRunner to measure response time at different points in the application. Determine where to run the Vusers and which Vusers to run according to the test objectives:

  • Measuring end-to-end response time:
    You can measure the response time that a typical user experiences by running a GUI Vuser or RTE Vuser at the front end. GUI Vusers emulate real users by submitting input to and receiving output from the client application; RTE Vusers emulate real users submitting input to and receiving output from a character-based application. You can run GUI or RTE Vusers at the front end to measure the response time across the entire network, including a terminal emulator or GUI front end, network, and server.
  • Measuring network and server response times: You can measure network and server response time, excluding response time of the GUI front end, by running Vusers (not GUI or RTE) on the client machine. Vusers emulate client calls to the server without the user interface. When you run many Vusers from the client machine, you can measure how the load affects network and server response time.
  • Measuring GUI response time:
    You can determine how the client application interface affects response time by subtracting the previous two measurements:
    GUI response time = end-to-end - network and server
  • Measuring server response time:
    You can measure the time it takes for the server to respond to a request without going across the network. When you run Vusers on a machine directly connected to the server, you can measure server performance.
  • Measuring middleware-to-server response time:
    You can measure response time from the server to middleware if you have access to the middleware and its API. You can create Vusers with the middleware API and measure the middleware-server performance.

Defining Vuser Activities

Create Vuser scripts based on your analysis of Vuser types, their typical tasks, and your test objectives. Since Vusers emulate the actions of a typical end user, the Vuser scripts should include the typical end-user tasks. For example, to emulate an online banking client, you should create a Vuser script that performs typical banking tasks: browsing the pages a customer normally visits to transfer funds or check balances.

You decide which tasks to measure based on your test objectives, and define transactions for these tasks. Transactions measure the time that it takes for the server to respond to tasks submitted by Vusers (end-to-end time). For example, to check the response time of a bank Web server supplying an account balance, define a transaction for this task in the Vuser script. In addition, you can emulate peak activity by using rendezvous points in your script. Rendezvous points instruct multiple Vusers to perform tasks at exactly the same time. For example, you can define a rendezvous to emulate 70 users simultaneously updating account information.
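Conceptually, a transaction brackets a task with a start and stop timer, the way `lr_start_transaction`/`lr_end_transaction` bracket a task in a Vuser script. The following is a plain-Python analogue of that idea, not LoadRunner's actual C API; `check_balance` is a hypothetical stand-in for the real server request:

```python
import time

def check_balance():
    """Stand-in for the real server request being measured."""
    time.sleep(0.05)  # placeholder for the server round trip
    return "balance: 1,234.56"

# Bracket the task with a timer, as a transaction would.
start = time.perf_counter()
result = check_balance()
elapsed = time.perf_counter() - start
print(f"transaction 'check_balance' took {elapsed:.3f}s")
```

In a real scenario, LoadRunner records these per-transaction times for every Vuser, so you can report response time for each business task under load.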

Selecting Vusers

Before you decide on the hardware configuration to use for testing, determine the number and type of Vusers required. To decide how many Vusers and which types to run, look at the typical usage model, combined with the testing objectives. Some general guidelines are:

  • Use one or a few GUI Vusers to emulate each type of typical user connection.
  • Use RTE Vusers to emulate terminal users.
  • Run multiple non-GUI or non-RTE Vusers to generate the rest of the load for each user type. For example, suppose that you have five kinds of users, each performing a different business process:

Choosing Testing Hardware/Software

The hardware and software should be powerful and fast enough to emulate the required number of virtual users. To decide on the number of machines and correct configuration, consider the following:

  • It is advisable to run the LoadRunner Controller on a separate machine.
  • Each GUI Vuser requires a separate Windows-based machine; several GUI Vusers can run on a single UNIX machine.
  • Configuration of the test machine for GUI Vusers should be as similar as possible to the actual user’s machine.

Refer to the following tables to estimate the required hardware for each LoadRunner testing component. These requirements are for optimal performance.

Windows Configuration Requirements

Note: The results file requires a few MB of disk space for a long scenario run with many transactions. The load generator machines also require a few MB of disk space for temporary files if there is no NFS.

UNIX Configuration Requirements

Note: The results file requires a few MB of disk space for a long scenario run with many transactions. The load generator machines also require a few MB of disk space for temporary files if there is no NFS.

Examining Load Testing Objectives

Your test plan should be based on a clearly defined testing objective. This section presents an overview of common testing objectives:

  • Measuring End-User Response Time
  • Defining Optimal Hardware Configuration
  • Checking Reliability
  • Checking Hardware or Software Upgrades
  • Evaluating New Products
  • Identifying Bottlenecks
  • Measuring System Capacity

Measuring End-User Response Time

Check how long it takes for the user to perform a business process and receive a response from the server. For example, suppose you want to verify that, while your system operates under normal load conditions, end users receive responses to all requests within 20 seconds. The following graph presents a sample load vs. response time measurement for a banking application:

Defining Optimal Hardware Configuration

Check how various system configurations (memory, CPU speed, cache, adaptors, modems) affect performance. Once you understand the system architecture and have tested the application response time, you can measure the application response for different system configurations to determine which settings provide the desired performance levels. For example, you could set up three different server configurations and run the same tests on each configuration to measure performance variations:

  • Configuration 1: 200MHz, 64MB RAM
  • Configuration 2: 200MHz, 128MB RAM
  • Configuration 3: 266MHz, 128MB RAM
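Once each configuration has been tested, choosing the optimal one is a direct comparison of the measured results. The response times below are hypothetical, for illustration only:

```python
# Hypothetical average response times (seconds) measured by running the
# same test on each of the three configurations above.
results = {
    "Configuration 1: 200MHz, 64MB RAM":  9.2,
    "Configuration 2: 200MHz, 128MB RAM": 6.8,
    "Configuration 3: 266MHz, 128MB RAM": 5.1,
}

best = min(results, key=results.get)
print(f"Best response time: {best} ({results[best]}s)")
```

Comparing one variable at a time (RAM between configurations 1 and 2, CPU between 2 and 3) also shows how much each upgrade contributes on its own.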

Checking Reliability

Determine the level of system stability under heavy or continuous work loads. You can use LoadRunner to create stress on the system: force the system to handle extended activity in a compressed time period to simulate the kind of activity a system would normally experience over a period of weeks or months.

Checking Hardware or Software Upgrades

Perform regression testing to compare a new release of hardware or software to an older release. You can check how an upgrade affects response time (benchmark) and reliability. Application regression testing does not check new features of an upgrade; rather it checks that the new release is as efficient and reliable as the older release.

Evaluating New Products

You can run tests to evaluate individual products and subsystems during the planning and design stage of a product’s life cycle. For example, you can choose the hardware for the server machine or the database package based on evaluation tests.

Identifying Bottlenecks

You can run tests that identify bottlenecks in the system and determine which element is causing performance degradation, for example, file locking, resource contention, or network overload. Use LoadRunner in conjunction with network and machine monitoring tools to create load and measure performance at different points in the system.

Measuring System Capacity

Measure system capacity, and determine how much excess capacity the system has before performance begins to degrade. To check capacity, compare performance against load on the existing system, and determine where significant response-time degradation begins. This point is often called the “knee” of the response-time curve.

Once you determine the current capacity, you can decide if resources need to be increased to support additional users.
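Finding the knee amounts to spotting where response time starts degrading sharply per added user. The sketch below uses hypothetical load-test data and a simple threshold rule (flagging the first step where response time more than doubles); real analysis would use the graphs LoadRunner produces:

```python
# Hypothetical load-vs-response-time measurements.
loads = [50, 100, 150, 200, 250, 300]        # concurrent users
response = [1.0, 1.1, 1.3, 1.6, 3.5, 8.0]    # avg response time (seconds)

# Flag the first load step where response time more than doubles
# relative to the previous step -- a crude proxy for the "knee".
knee = None
for prev, cur, load in zip(response, response[1:], loads[1:]):
    if cur / prev > 2.0:
        knee = load
        break

print(f"Significant degradation begins near {knee} users")
```

In this sample data, response time grows gently up to 200 users and then jumps, so capacity planning would focus on the 200-250 user range.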