User Interface Design Testing

User interface design testing evaluates how well a design takes care of its users, offers clear direction, delivers feedback, and maintains consistency of language and approach. Subjective impressions of ease of use and look and feel are carefully considered in UI design testing. Issues pertaining to navigation, natural flow, usability, commands, and accessibility are also assessed in UI design testing.

During UI design testing, you should pay particular attention to the suitability of all aspects of the design. Look for areas of the design that lead users into errors or that do not clearly indicate what is expected of users.

Consistency of aesthetics, feedback, and interactivity directly affects an application's usability, and should therefore be carefully examined. Users must be able to rely on the cues they receive from an application to make effective navigation decisions and understand how best to work with an application. When cues are unclear, communication between users and applications can break down.

It is essential to understand the purpose of the software under test (SUT) before beginning UI testing. The two main issues to consider are:

  1. Who is the application's target user?
  2. What design approach has been employed?

With answers to these questions, you will be able to identify program functionality and design that do not behave as a reasonable target user would expect they should. Keep in mind that UIs serve users, not designers or programmers. As testers, we represent users and must be conscious of their needs.

Profiling the Target User
Gaining an understanding of a Web application's target user is central to evaluating the design of its interface. Without knowing the user's characteristics and needs, it can be a challenge to assess how effective the UI design is.

User interface design testing involves the profiling of two target-user types: (1) server-side users and, more important, (2) client-side users. Users on the client side generally interact with Web applications through a Web browser. More than likely they do not have as much technical and architectural knowledge as users on the server side of the same system. Additionally, the application features that are available to client-side users often differ from the features that are available to server-side users (who are often system administrators).

Therefore, client-side UI testing and server-side UI testing should be evaluated by different standards. When creating a user profile, consider the following four categories of criteria (for both client-side and server-side users).

Computer Experience
How long has the intended user been using a computer? Do they use a computer professionally or only casually at home? What activities are they typically involved with? What assumptions does the SUT make about user skill level, and how well do the expected user's knowledge and skills match those assumptions?

For client-side users, technical experience may be quite limited, but the typical user may have extensive experience with a specific type of application, such as a spreadsheet, word processor, desktop presentation program, drawing program, or instructional development software. In contrast, system administrators and information services (IS) personnel who install and set up applications on the server side probably possess significant technical experience, including in-depth knowledge of system configuration and script-level programming. They may also have extensive troubleshooting experience, but limited experience with typical end-user application software.

Web Experience
How long has the user been using the Web system? Web systems occasionally require client-side users to configure browser settings. Therefore, some experience with Web browsers will be helpful. Is the user familiar with Internet jargon and concepts, such as Java, ActiveX, HyperText Markup Language (HTML), proxy servers, and so on? Will the user require knowledge of related helper applications such as Acrobat Reader, File Transfer Protocol (FTP), and streaming audio/video clients? How much Web knowledge is expected of server-side users? Do they need to modify Practical Extraction and Reporting Language (Perl) or Common Gateway Interface (CGI) scripts?

Domain Knowledge
Is the user familiar with the subject matter with which the application is associated? For example, if the program involves building formulas into spreadsheets, it is certainly targeted at client-side users with math skills and some level of computing expertise. It would be inappropriate to test such a program without the input of a tester who has experience working with spreadsheet formulas.

Another example is the testing of a music notation–editing application. Determining whether the program is designed for experienced music composers who understand the particulars of musical notation, or for novice musicians who may have little to no experience with music notation, is critical to evaluating the effectiveness of the design. Novice users want elementary tutorials, and expert users want efficient utilities. Is the user of an e-commerce system a retailer who has considerable experience with credit card–processing practices? Is the primary intended user of an online real estate system a realtor who understands real estate listing services, or is it a first-time home buyer?

Application-Specific Experience
Will users be familiar with the purpose and abilities of the program because of past experience? Is this the first release of the product, or is there an existing base of users in the marketplace who are familiar with the product? Are there other popular products in the marketplace that have a similar design approach and functionality? Keep in mind that Web applications are still a relatively new class of application. It is possible that you are testing a Web application that is the first of its kind to reach the marketplace.

Consequently, target users may have substantial domain knowledge but no application-specific experience.

With answers to these questions, you should be able to identify the target user for whom an application is designed. There may be several different target users. With a clear understanding of the application's target users, you can effectively evaluate an application's interface design and uncover potential UI errors.
Table offers a means of grading the four attributes of target-user experience. User interface design should be judged, in part, by how closely the experience and skills of the target user match the characteristics of the SUT.

Once we have a target-user profile for the application under test, we will be able to determine if the design approach is appropriate and intuitive for its intended users. We will also be able to identify characteristics of the application that make it overly difficult or simple. Overly simplistic design can result in as much loss of productivity as an overly complex design can. Consider the bug-report screen in the sample application. It includes numerous data-entry fields. Conceivably, the design could have broken up the functionality of the bug-report screen over multiple screens. Although such a design might serve novice users, it would unduly waste the time of more experienced users—the application's target.

Evaluating Target-User Experience


Testing the Sample Project
Consider the target user of the sample application. The sample application is designed to support the efforts of software development teams. When we designed the sample application, we assumed that the application's target user would have, at a minimum, intermediate computing skills, at least beginning-level Web experience, and intermediate experience in the application's subject matter (bug tracking). We also assumed that the target user would have at least beginning experience with applications of this type.

Beyond these minimum experience levels, we knew that it was also possible that the target user might possess high experience levels in any or all of the categories. Table shows how the sample application's target user can be rated.

Evaluating Sample Application Target User


Considering the Design
The second step in preparing for UI design testing is to study the design employed by the application. Different application types and target users require different designs. For example, in a program that includes three branching options, a novice computer user might be better served by delivering the three options over the course of five interface screens, via a wizard. An information services (IS) professional, on the other hand, might prefer receiving all options on a single screen, so that he or she could access them more quickly.

TOPICS TO CONSIDER WHEN EVALUATING DESIGN

  • Design approach (discussed in the following section)
  • User interaction (data input)
  • Data presentation (data output)

Design Approach
Design metaphors are cognitive bridges that can help users understand the logic of UI flow by relating it to experiences that users may have had in the real world, or elsewhere. An example of an effective design metaphor is a Web directory site that utilizes a design reminiscent of a library card catalog. Another example is a scheduling application that visually mirrors the layout of a desktop calendar and address book. Microsoft Word uses a document-based metaphor for its word-processing program, a metaphor that is common to many types of applications.

EXAMPLES OF TWO DIFFERENT DESIGN METAPHORS

  • Figure depicts an application that utilizes a document-based metaphor. This metaphor includes a workspace where data can be entered and manipulated in a way that is similar to writing on a piece of paper.
  • Figure exemplifies a device-based metaphor. This virtual calculator includes UI controls that are designed to receive user input and perform functions.

TWO DIFFERENT APPROACHES TO CONVEY IDENTICAL INFORMATION AND COMMANDS

  • Figure conveys navigation options to users via radio buttons at the top of the interface screen.
  • Figure conveys the same options via an ActiveX pull-down menu.

Document-based metaphor


Device-based metaphor


Neither design approach is more correct than the other; they are simply different. Regardless of the design approach employed, it is usually not our role as testers to judge which design is best. However, that does not mean that we should overlook design errors, especially if we work for an organization that cares about subjective issues such as usability. Our job is to point out as many design deficiencies as possible, as early in testing as possible. Certainly, it is our job to point out inconsistency in the implementation of the design. That is, if the approach uses a pull-down menu rather than radio buttons, a pull-down menu should then be used consistently in all views.

Think about these common issues:

  • Keep in mind that the UI tags, controls, and objects supported by HTML are primitive compared with those available through the Graphical User Interface (GUI) available on Microsoft Windows or Macintosh operating systems. If the designer intends to use the Windows UI metaphor, look for design deficiencies.
  • If you have trouble figuring out the UI, chances are it's a UI error because your end users would go through the same experience.
  • The UI was inadvertently designed for the designers or developers rather than for the end users.
  • The important features are misunderstood or hard to find.
  • Users are forced to think in terms of the design metaphor from the designer's perspective, although the metaphor itself is difficult to relate to in real-life experience.
  • Different terms were used to describe the same functionality.

Navigation options via radio buttons

Ask yourself these questions:

  • Is the design of the application under test appropriate for the target audience?
  • Is the UI intuitive (you don't have to think too much to figure out how to use the product) for the target audience?
  • Is the design consistently applied throughout the application?
  • Does the interface keep the user in control, rather than reacting to unexpected UI events?
  • Does the interface offer pleasing visual design (look and feel) and cues for operating the application?
  • Is the interface simple to use and understand?
  • Is help available from every screen?
  • Will usability tests be performed on the application under test? If yes, will you be responsible for coordinating or conducting the test? This is a time-consuming process, and it has to be very well planned.

Navigation options via pull-down menu


User Interaction (Data Input)
Users can perform various types of data manipulation through keyboard and mouse events. Data manipulation methods are made available through on-screen UI controls and other technologies, such as cut-and-paste and drag-and-drop.

User Interface Controls
User interface controls are graphic objects that enable users to interact with applications. They allow users to initiate activities, request data display, and specify data values. Controls, commonly coded into HTML pages as form elements, include radio buttons, check boxes, command buttons, scroll bars, pull-down menus, text fields, and more. Figure 9.5 includes a standard HTML text box that allows limited text input from users, and a scrolling text box that allows users to enter multiple lines of text. Clicking the Submit button beneath these boxes submits the entered data to a Web server. The Reset buttons return the text boxes to their default state.

Radio buttons are mutually exclusive—only one radio button in a set can be selected at a time. Check boxes, on the other hand, allow multiple options in a set to be selected simultaneously. Figure includes a pull-down menu that allows users to select one of multiple predefined selections. Clicking the Submit button submits the user's selection to the Web server. The Reset button resets the menu to its default state. The pushbuttons (Go Home and Search) initiate actions such as running CGI scripts, executing search queries, submitting data to a database, or following hyperlinks.
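
The selection behaviors just described can be sketched as plain JavaScript models. The function and field names here are illustrative, not part of any browser API: selecting a radio option deselects its siblings, while toggling a check box leaves the others alone.

```javascript
// Model of a radio-button group: selecting one option deselects the rest.
function selectRadio(group, choice) {
  return group.map(opt => ({ ...opt, checked: opt.value === choice }));
}

// Model of a check-box group: toggling one option leaves the others alone.
function toggleCheckbox(group, choice) {
  return group.map(opt =>
    opt.value === choice ? { ...opt, checked: !opt.checked } : opt
  );
}

const options = [
  { value: "low", checked: false },
  { value: "medium", checked: true },
  { value: "high", checked: false },
];

// Radio behavior: exactly one option ends up checked.
const radios = selectRadio(options, "high");
console.log(radios.filter(o => o.checked).length); // 1

// Check-box behavior: selections accumulate independently.
const boxes = toggleCheckbox(options, "high");
console.log(boxes.filter(o => o.checked).length); // 2
```

A tester can use such a model to state the expected behavior precisely before exercising the real controls in a browser.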

Figure also includes examples of images (commonly referred to as graphics or icons) that can serve as hyperlinks or simulated pushbuttons.

Form-based HTML UI controls, including a standard HTML text box and a scrolling text box


Form-based HTML UI controls: including a pull-down menu


Figures illustrate the implementation of several standard HTML UI controls on a Web page. Figure shows the objects (graphic link, mouse-over link titles or ALT, and a text link) as they are presented to users. Figure shows the HTML code that generates these objects.

Standard HTML controls, such as tables and hyperlinks, can be combined with images to simulate conventional GUI elements such as those found in Windows and Macintosh applications (navigation bars, command buttons, dialog boxes, etc.). The left side of Figure (taken from the sample application) shows an HTML frame that has been combined with images and links to simulate a conventional navigation bar.

Dynamic User Interface Controls
The HTML multimedia tags enable the use of dynamic UI objects, such as Java applets, ActiveX controls, and scripts (including JavaScript and VBScript).

Scripts
Scripts are programming instructions that are executed by browsers when HTML pages load or when they are called based on certain events. Some scripts are a form of object-oriented programming, meaning that program instructions identify and send instructions to individual elements of Web pages (buttons, graphics, HTML forms, etc.), rather than to pages as a whole. Scripts do not need to be compiled and can be inserted directly into HTML pages. Scripts are embedded into HTML code with <SCRIPT> tags. Scripts can be executed on either the client side or the server side. Client-side scripts are often used to dynamically set values for UI controls, modify Web page content, validate data, and handle errors.
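
Client-side validation of the kind just described can be sketched as a small JavaScript routine that checks form values before they are sent to the server. The field names, message text, and the deliberately simple email pattern are invented for illustration:

```javascript
// Hypothetical client-side check: validate fields before the form is
// submitted to the server, avoiding a needless network round trip.
function isValidEmail(value) {
  // A deliberately simple pattern, for illustration only.
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);
}

function validateForm(fields) {
  const errors = [];
  if (!fields.email || !isValidEmail(fields.email)) {
    errors.push("Please enter a valid email address.");
  }
  if (!fields.name || fields.name.trim() === "") {
    errors.push("Name is required.");
  }
  return errors; // an empty array means the form may be submitted
}

console.log(validateForm({ name: "Ada", email: "ada@example.com" })); // []
console.log(validateForm({ name: "", email: "not-an-email" }).length); // 2
```

In a real page, a routine like this would run in the form's submit handler, and the same rules would still need to be enforced on the server side.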

Graphic links, mouse-over text, and text links


HTML code for graphic links, mouse-over text, and text links


Tables, forms, and frames simulating Windows-based UI controls


There are a number of scripting languages supported by popular browsers. Some browsers support particular scripting languages and exclude others. JavaScript, produced by Netscape, is one of the more popular scripting languages. Other popular scripting languages include Microsoft's version of JavaScript (JScript) and Visual Basic Script (VBScript).

Java
Java is a computing language developed by Sun Microsystems that allows applications to run over the Internet (though Java objects are not limited to running over the Internet).

Java is a compiled language, which means that it must be run through a compiler to be translated into a language that computer processors can use. Unlike other compiled languages, Java produces a single compiled version of itself, called Java bytecode. Bytecode is a series of tokens and data that are normally interpreted at runtime. By compiling to this intermediate language rather than to binaries that are specific to a given type of computer, a single Java program can be run on several different computer platforms for which there is a Java Virtual Machine (Java VM). Once a Java program has been compiled into bytecode, it is placed on a Web server. Web servers deliver bytecode to Web browsers, which interpret and run the code.

Java programs designed to run inside browsers are called applets. When a user navigates to a Web site that contains a Java applet, the applet automatically downloads to the user's computer. Browsers require Java bytecode interpreters to run applets. Java-enabled browsers, such as Netscape Navigator and Internet Explorer, have Java bytecode interpreters built into them. Precautions are taken to ensure that Java programs do not download viruses onto users' computers. Java applets must go through a verification process when they are first downloaded to users' machines, to ensure that their bytecode can be run safely. After verification, bytecode is run within a restricted area of RAM on users' computers.

ActiveX

ActiveX controls are Windows custom controls that run within ActiveX-enabled browsers (such as Internet Explorer), rather than off servers. Similar to Java applets, ActiveX controls support the execution of event-based objects within a browser.

One major benefit of ActiveX controls is that they are components. Components can be easily combined with other components to create new, feature-rich applications. Another benefit is that once a user downloads an ActiveX control, he or she will not have to download it again in the future; ActiveX controls remain on users' systems, which can speed up load time for frequently visited Web pages.

Some disadvantages of ActiveX are that it is dependent on the Windows platform, and some components are so big that they use too much system memory. ActiveX controls, because they reside on client computers and generally require an installation and registration process, are considered by some to be intrusive. Figure 9.10 shows a calendar system ActiveX control. Figure shows the HTML code that generated the page in Figure. An HTML <OBJECT> tag gives the browser the ActiveX control class ID so that it can search the registry to determine the location of the control and load it into memory.

Calendar system ActiveX control


HTML code that generated the ActiveX control


Sometimes, multiple ActiveX controls are required on the same HTML page. In such instances, controls may be stored on the same Web server, or on different Web servers.

Server-Side Includes
Server-side includes (SSIs) are directives to Web servers that are embedded in HTML comment tags. Web servers can be configured to examine HTML documents for such comments and to perform appropriate processes when they are detected. SSIs are typically used to pull additional content from other sources into Web pages, for example, the addition of current date and time information. Following is an example of an SSI (enclosed between HTML comment tags) requesting that the Web server call a CGI script named mytest.cgi:

<!--#exec cgi="/cgi-bin/mydir/mytest.cgi"-->

Style Sheets

Style sheets are documents that define style standards for a given set of Web pages. They are valuable in maintaining style consistency across multiple Web pages. Style sheets allow Web designers to define design issues such as fonts and colors from a central location, thus freeing designers from concerns over inconsistent graphic presentation that might result from browser display differences or developer oversight.

Style sheets set style properties for a variety of HTML elements: text style, font size and face, link colors, and more. They also define attribute units such as length, percentage, and color. The problem with traditional style sheets is that they do not take the dynamic nature of Web design into account. Web pages themselves offer multiple means of defining styles without the use of style sheets—for example, style properties can be defined in an HTML page's header, or inline in the body of an HTML document. Such dynamic style definition can lead to conflicting directives.

Cascading style sheets (CSS) is the most common and most mature style sheet language. Cascading style sheets offer a system for determining priority when multiple stylistic influences are directed onto a single Web page element.

Cascading style sheets dictate the style rules that are to be applied when conflicting directives are present. Cascading style sheets allow Web designers to manage multiple levels of style rules over an unlimited number of Web pages. For example, a certain line of text on a Web page might be defined as blue in the page's header, as red in the page's body text (inline), and as black in an external style sheet. In this scenario, CSS could establish a hierarchy of priority for the three conflicting style directives. The CSS could be set up to dictate that inline style commands take priority over all other style commands. Following that in priority might be "page-wide" style commands (located in page headers). Finally, external style sheet commands might hold the least influence of the three style command types. There are different means of referencing style sheets. The browser takes all style information (possibly conflicting) and attempts to interpret it. Figure shows a mixture of styles applied to a page. Some of the approaches may be incompatible with some browsers.
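
The priority scheme in this example can be modeled as a simple lookup that walks the style sources from highest to lowest priority. This is only an illustration of the precedence idea; real CSS resolution also weighs selector specificity and !important:

```javascript
// Resolve a style property the way the cascade example describes:
// inline rules win, then page-header rules, then the external sheet.
const PRIORITY = ["inline", "header", "external"];

function resolveStyle(property, sources) {
  for (const level of PRIORITY) {
    if (sources[level] && property in sources[level]) {
      return sources[level][property];
    }
  }
  return undefined; // no rule anywhere: fall back to the browser default
}

// The conflicting example from the text: blue in the header, red inline,
// black in the external sheet.
const sources = {
  inline:   { color: "red" },
  header:   { color: "blue", "font-size": "12pt" },
  external: { color: "black", "font-family": "serif" },
};

console.log(resolveStyle("color", sources));       // "red" (inline wins)
console.log(resolveStyle("font-size", sources));   // "12pt"
console.log(resolveStyle("font-family", sources)); // "serif"
```

Working out the expected winner for each property this way gives a tester a concrete oracle for checking what the browser actually renders.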

Mixed styles.


Some errors that you should look for include:

  • The default state of UI controls is incorrect.
  • Poor choice of default state.
  • The updated state of a UI control is incorrect.
  • The default input value is incorrect.
  • Poor choice of default value.
  • The updated input value is incorrect.
  • The initial input focus is not assigned to the most commonly used control.
  • The most commonly used action button is not the default one.
  • The form or dialog box is too big under minimum support display resolution (e.g., 800 × 600).
  • The HTML code is often generated dynamically. It's essential to understand how the HTML code is generated. Don't assume that because you have already tested "that" page, you won't have to test it again until something changes.
  • Set the browser's text size (View > Text Size) to Largest and then to Smallest to see how each setting may affect the UI.
  • Check for the existence of ALT attributes.
  • Check for correct descriptions in ALT attributes.
  • Avoid reporting multiple bugs for broken links or missing images caused by the same error (e.g., the same image missing from 20 HTML pages).
  • Invalid inputs are not detected and handled at client side.
  • Invalid inputs are not detected and handled at server side.
  • Scripts are normally used to manipulate standard UI (form) controls (e.g., set input focus, set default state, etc.). This is a tedious programming chore, and the process normally produces errors. Look for them.
  • Scripts, CSS, Java applets, and ActiveX controls commonly cause incompatibility errors among different browser releases produced by different vendors. Make sure to run compatibility tests for all supported browsers.
  • If your application uses scripts, Java applets, and ActiveX controls, and users might have disabled one or more of these features, can your application still function at some capacity, or will it simply stop functioning?
  • To test for script (such as JavaScript) incompatibility problems between different browser brands and versions, first identify which pages use script, and for what purposes. Once these pages are cataloged, run them through one of the HTML authoring tools that has built-in support for checking script incompatibility based on static analysis. One tool that I am familiar with that provides this support is Macromedia's Dreamweaver.
  • Will the Web pages display correctly on handheld devices, which often do not support graphics and have relatively small screen "real estate"?
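
Some of the checks above can be partly automated. As one sketch, this JavaScript function scans raw HTML for <IMG> tags that lack an ALT attribute. It is a rough regular-expression pass for illustration only, not a full HTML parser, and the sample markup is invented:

```javascript
// Find <img> tags that have no alt attribute in a chunk of HTML.
function imgsMissingAlt(html) {
  const tags = html.match(/<img\b[^>]*>/gi) || [];
  return tags.filter(tag => !/\balt\s*=/i.test(tag));
}

const page = `
  <img src="logo.gif" alt="Company logo">
  <img src="spacer.gif">
  <img SRC="chart.png" ALT="">
`;

console.log(imgsMissingAlt(page).length); // 1 (only spacer.gif lacks alt)
```

Note that an empty ALT attribute still counts as present here; checking that ALT descriptions are correct (the next item in the list) remains a manual review task.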

Navigation Methods
Navigation methods dictate how users navigate through a program—from one UI control to another within the same page (screen, window, or dialog box), and from one page to the next. User navigation is achieved through input devices, such as keyboard and mouse. Navigation methods are often evaluated by how easily they allow users to get to commonly used features and data.

Ask yourself these questions:

  • Is the application's navigation intuitive?
  • Is accessibility to commonly used features and data consistent throughout the program?
  • Can users always tell where they are in the program and what navigation options are available to them?
  • How well is information presented to the user?
  • If the program utilizes a central workspace, does the workspace remain consistent from screen to screen?
  • Do navigation conventions remain consistent throughout the application (navigation bars, menus, hyperlinks, etc.)?
  • Examine the application for consistent use of mouse-over pop-ups, clicks, and object dragging. Do the results of these actions offer differing results from one screen to the next?
  • Do the keyboard alternatives for navigation remain consistent throughout the application?
  • Are all features accessible via both mouse and keyboard action?
  • Press the Tab key repeatedly and examine the highlight path that is created. Is it logical and consistent?
  • Press Shift-Tab repeatedly and examine the highlight path that is created. Is it logical and consistent?
  • Look at the keyboard shortcuts that are supported. Are they functioning? Is there duplication among them?
  • If the user clicks a credit card payment button on an e-commerce site numerous times while he or she is waiting for server response, will the transaction erroneously be submitted numerous times?
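
The last question above, about repeated clicks on a payment button, can be mitigated on the client side. This sketch (the function names are invented for illustration) wraps a submit action so that repeated clicks are ignored until the application explicitly resets the guard, for example when the server responds:

```javascript
// Wrap a submit action so it can fire only once until reset.
function once(action) {
  let pending = false;
  const guarded = (...args) => {
    if (pending) return "ignored"; // a repeated click while waiting
    pending = true;
    return action(...args);
  };
  guarded.reset = () => { pending = false; }; // call when the server replies
  return guarded;
}

let charges = 0;
const submitPayment = once(() => { charges += 1; return "submitted"; });

submitPayment(); // first click: the transaction goes out
submitPayment(); // impatient second click: ignored
submitPayment(); // third click: still ignored
console.log(charges); // 1
```

When testing an e-commerce page, the absence of a guard like this is exactly what rapid repeated clicks on the payment button will expose.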

Testing the Sample Application
User navigation within the sample application is achieved via standard UI controls (keyboard and mouse events). Data updates are submission based, meaning that they are achieved by clicking action buttons, such as Submit. Figure diagrams how users navigate through the sample application's trend metrics and distribution metrics features.

Sample application navigation


Mouse/Keyboard Action Matrices
Appendices D and E contain test matrices that detail mouse and keyboard actions. These matrices can be customized to track navigation test coverage for the Web system under test.

Action Commands
Occasionally, the names of on-screen commands are not used consistently throughout an application. This is partially attributable to the fact that the meaning of command names often varies from one program to the next. If the nomenclature of certain commands varies within a single program, user confusion is likely to result. For example, if a Submit command is used to save data in one area of a program, then the Submit command name should be used for all saving activities throughout the application.

Consideration should be given to the action commands that are selected as the default commands. Default action commands should be the least risky of the available options (the commands least likely to delete user-created data).
Table lists a number of common confirming-action and canceling-action commands, along with their meanings and the decisions that they imply.

Confirming and Canceling Commands


Feedback and Error Messages
Consistency in audible and visible feedback is essential for maintaining clear communication between users and applications. Messages (both visible and audible), beeps, and other sound effects must remain consistent and user friendly to be effective. Error messaging in particular should be evaluated for clarity and consistency.

Examine how interface components are used within feedback, looking for unusual or haphazard implementations. One can identify commonly accepted guidelines within each computing platform for standard placement of UI elements, such as placing OK and Cancel buttons in the bottom right corner of dialog boxes. Alternate designs may make user interaction unnecessarily difficult.

Two types of message-based feedback are available. Figure illustrates a typical client-based error message (generated by error-checking JavaScript on the client side) that utilizes a browser-based message box. Figure shows typical server-based feedback. Client-based error messages are generally more efficient and cause less strain on servers than do server-based error messages. Server-based error messages require that data first be sent from the client to the server and then returned from the server back to the client where the error message is displayed to the user.

Client-based error messages, on the other hand, using script (such as JavaScript) embedded in an HTML page, can prevent such excessive network traffic by identifying errors and displaying error messages locally, without requiring contact with the server. Because scripting languages such as JavaScript behave differently with each browser version, testing of all supported platforms is essential.

As a general rule, simple errors such as invalid inputs should be detected and handled at the client side. The server, of course, has to detect and handle error conditions that do not become apparent until they interfere with some process being executed on the server side. Another consideration is that, sometimes, the client might not understand the error condition being responded to by the server, and it might therefore ignore the condition, or display the wrong message, or display a message that no human can understand.
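
The last concern, a client that misinterprets a server error condition, can be reduced by mapping known server error codes to user-readable messages with an explicit fallback. The codes and message text below are invented for illustration:

```javascript
// Translate server-side error codes into messages a user can act on.
const ERROR_TEXT = {
  DUPLICATE_KEY: "That record already exists. Please edit the existing one.",
  FIELD_TOO_LONG: "One of your entries is too long. Please shorten it.",
};

function messageFor(code) {
  // An unrecognized code gets a generic but honest message instead of
  // raw internals such as "Driver error 80004005".
  return ERROR_TEXT[code] ||
    `An unexpected error occurred (code: ${code}). Please try again.`;
}

console.log(messageFor("DUPLICATE_KEY"));
console.log(messageFor("E_80004005")); // falls back to the generic message
```

A tester can probe this boundary by deliberately triggering server errors and checking that the client shows a specific message for known conditions and a sensible fallback for unknown ones.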

Browser-based error message


Server-based feedback


Additionally, the client might not switch to the appropriate state or change the affected data items in the right way unless it understands the error condition reported by the server. Some errors to look for include the following:

  • Displaying incorrect error message for the condition.
  • Missing error messages.
  • Poorly worded, grammatically incorrect, and misspelled errors.
  • Messages were not written for the user and, therefore, are not useful to the user. For example, "Driver error 80004005."
  • The error message is not specific, nor does it offer a plausible solution.
  • Similar errors are handled by different error messages.
  • Unnecessary messages distract users.
  • Inadequate feedback or error communication to users.
  • Handling methods used for similar errors are not consistent.

Ask yourself these questions:

  • Does the UI cause deadlocks in communication with the server (creating an infinite loop)?
  • Does the application allow users to recover from error conditions, or must the application be shut down?
  • Does the application offer users adequate warning and options when they venture into error-prone activities?
  • Are error messages neutral and consistent in tone and style?
  • Is there accompanying text for people who are hearing-impaired or have their computer's sound turned off?
  • If video is used, do picture and sound stay in sync?

Data Presentation (Data Output)
In Web applications, information can be communicated to users via a variety of UI controls (e.g., menus, buttons, check boxes, etc.) that can be created within an HTML page (frames, tables, simulated dialog boxes, etc.).

Figure illustrates three data presentation views that are available in the sample application. Each view conveys the same data through a different template built using HTML frames and tables.

In this sample application example, there are at least three types of potential errors: (1) data errors (incorrect data in records caused by write procedures), (2) database query errors, and (3) data presentation errors. A data error or database query error will manifest itself in all presentations, whereas a presentation error in server-side scripts will manifest itself only in the presentation with which it is associated. Figure illustrates the data presentation process. Where errors manifest themselves depends on where the errors occur in the process.
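
The localization of errors described above can be pictured as a small pipeline: a fault in the data or query stage surfaces in every view, while a fault in one view's template surfaces only there. The record fields and view formats below are invented for illustration:

```javascript
// Data -> query -> per-view presentation.
const db = { 42: { id: 42, summary: "Crash on save" } };

function query(database, id) { return database[id]; } // query stage
function fullView(rec) { return `#${rec.id}: ${rec.summary}`; } // one template
function editView(rec) { return `[${rec.id}] ${rec.summary}`; } // another template

const rec = query(db, 42);

// A data or query error would corrupt `rec`, so BOTH views would show it.
// A presentation bug in fullView alone would leave editView correct.
console.log(fullView(rec)); // "#42: Crash on save"
console.log(editView(rec)); // "[42] Crash on save"
```

Comparing the same record across views is therefore a quick way to decide whether a visible defect lives in the data, the query, or one view's template.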

Single issue report presented in Full View


Same issue report presented in Edit View


Analyze the application to collect design architectural information. One of the most effective ways to do this is to interview your developer. Once the information is collected, use it to develop test cases that are more focused at the unit level, as well as at the interoperability level.

