Questionnaire Design

To some extent, writing questions that both motivate respondents and get at the truth is a skill acquired only by considerable practice; it is not something any intelligent person can automatically do. One way the low-budget researcher can acquire this experience quickly is to borrow questions from others, preferably questions used by several other researchers. Using questions from secondary sources not only ensures that the questions have been pretested; it also guarantees that a database exists elsewhere against which the researcher can compare the present results.

The U.S. Census is a good source of such questions, in part because its categories (for example, for income or occupations) are the ones used by most researchers and in part because the Census Bureau provides vast quantities of data against which to validate the researcher’s own work.

Once the borrowing possibilities have been exhausted, the neophyte researcher should seek the help of an expert question writer if the cost is affordable. Alternatively, once a questionnaire is drafted, the instrument should be reviewed by as many colleagues as possible, especially those who will be critical. Finally, the researcher should test the instrument with potential respondents, even if they are only the office staff and in-laws. I have never yet written a questionnaire that did not have major flaws, ambiguities, and even missing categories, despite being sure each time that I had finally done it right. It takes a thorough pretest to bring these problems out. My own preference is to continue pretesting each redraft until I am confident the instrument is right. I keep reminding myself that if I do not measure whatever I am studying validly at the start, all the subsequent analysis and report writing I might do will be wasted.

Following are possible questionnaire biases that could crop up in research instruments.

Question Order Bias
Sometimes questions early in a questionnaire can influence later ones. For example, asking someone to rank a set of criteria for choosing among alternative service outlets makes it very likely that a later request for a ranking of these same outlets will be influenced by the very criteria already listed. Without the prior list, the respondent may have performed the evaluation using fewer or even different criteria. The solution here is to try different orderings during a pretest and see whether the order makes any difference. If it does, then the researcher should either place the more important question first or change the order in every other questionnaire (called rotating the questions) to balance the effects overall.
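
As a minimal sketch of this balancing step (the question wording and the two-version split are illustrative assumptions, not part of any particular survey package), rotation might be scripted in Python as follows:

    # Two questions whose relative order may bias answers (hypothetical wording).
    CRITERIA_QUESTION = "Rank these criteria for choosing a service outlet: price, location, hours."
    OUTLETS_QUESTION = "Rank these service outlets: Outlet A, Outlet B, Outlet C."

    def build_questionnaire(version_number):
        """Alternate the order of the two questions across questionnaire versions."""
        if version_number % 2 == 0:
            return [CRITERIA_QUESTION, OUTLETS_QUESTION]
        return [OUTLETS_QUESTION, CRITERIA_QUESTION]

    # Half the print run gets each ordering, balancing order effects overall.
    for version in range(4):
        print(version, build_questionnaire(version))
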
A more obvious questionnaire order effect is what might be called giving away the show. This problem seldom survives an outside review of the instrument or a pretest. However, I have seen first drafts of a questionnaire where, for example, wording that mimicked an advertising campaign was used as one of the dimensions for evaluating a political candidate. Later, a question asking for recall of advertising themes got a surprisingly high unaided recall of that particular theme.

A third kind of questionnaire order effect involves threatening questions that, if asked early, can cause a respondent to clam up or terminate the interview altogether. If a researcher must ask questions about sex, drugs, diseases, or income, it is better to leave them as late as possible in the instrument.

A final order effect applies to lengthy questionnaires. As respondents tire, they give shorter and less carefully thought out answers. Here again, put the more important questions early or rotate the questions among questionnaires.

Answer Order Bias

There is one major problem when respondents are given a choice of precoded categories with which to answer a question: a tendency for respondents, other things being equal, to give higher ratings to alternatives near the top of a list than to those near the bottom. In such instances, pretesting and (usually) rotation of answers are recommended. In practice, rotation is achieved during face-to-face or telephone interviews by having the supervisor highlight different precoded answer categories where the interviewer is to begin reading alternatives. (A CATI computer or Internet survey can do this automatically.) On mail questionnaires, the researcher must have the word processor reorder the alternatives and print several versions of the questionnaire to be mailed out randomly.
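
A minimal sketch of such rotation, assuming a cyclic scheme in which each successive respondent's list begins one alternative further down (the alternatives themselves are hypothetical):

    # Hypothetical precoded alternatives to be read to respondents.
    ALTERNATIVES = ["Brand A", "Brand B", "Brand C", "Brand D"]

    def rotated_alternatives(items, respondent_number):
        """Start the list at a different alternative for each respondent."""
        start = respondent_number % len(items)
        return items[start:] + items[:start]

    for respondent in range(4):
        print(respondent, rotated_alternatives(ALTERNATIVES, respondent))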

Scaling Bias

Wording and formatting of individual questions that attempt to scale attitudes, preferences, and the like can be an important source of bias. If the researcher must construct his or her own scales, the best approach is to use one of a number of pretested general techniques that can be customized for a specific study.

Thurstone Scales. In this approach, a large number of statements about an object of interest (such as a company, a charity, or a brand) are sorted by expert judges into nine or eleven groups separated along some prespecified dimension such as favorableness. The groups or positions are judged by the experts to be equally far from each other. The researcher then selects one or two statements from each group to represent each scale position. The final questionnaire presents respondents with all statements and asks them to pick the one that best portrays their feelings about each object. Their choices are assigned the rating given by the judges to that statement. The ratings are assumed to be interval scaled.
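
As a minimal sketch of the scoring step, assuming the judging has already been done (the statements and scale values below are hypothetical):

    # Hypothetical scale values assigned by expert judges
    # (1 = least favorable, 11 = most favorable).
    SCALE_VALUES = {
        "This charity wastes its money.": 2.0,
        "This charity is about average.": 6.0,
        "This charity does outstanding work.": 10.5,
    }

    def thurstone_score(chosen_statement):
        """A respondent's rating is the value judges assigned to the chosen statement."""
        return SCALE_VALUES[chosen_statement]

    print(thurstone_score("This charity does outstanding work."))  # 10.5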

Likert Scales. A problem with Thurstone scales is that they do not indicate how intensely a respondent holds a position. Likert scaling gives respondents a set of statements and asks them how much they agree with each statement, usually on a five-point continuum:

  1. strongly agree,
  2. somewhat agree,
  3. neither agree nor disagree,
  4. somewhat disagree, or
  5. strongly disagree

Responses to a selected series of such statements are then analyzed individually or summed to yield a total score. Likert scales are very popular, in part because they are easy to explain and to lay out on a questionnaire. They are also very easy to administer in telephone interviews. One problem with the technique is that the midpoint of a Likert scale is ambiguous. It can be chosen by those who truly don’t know and by those who are indifferent. For this reason, some researchers allow respondents a sixth option, “don’t know,” so that the midpoint will really represent indifference.
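
As a minimal sketch of the summing step (the numeric coding and the decision to drop "don't know" answers are conventions the researcher must choose, not fixed rules of the technique):

    # Hypothetical coding of the five labeled points plus the optional
    # sixth "don't know" category, which is excluded from the total.
    CODES = {
        "strongly agree": 1,
        "somewhat agree": 2,
        "neither agree nor disagree": 3,
        "somewhat disagree": 4,
        "strongly disagree": 5,
        "don't know": None,
    }

    def total_score(answers):
        """Sum coded responses across statements, skipping 'don't know'."""
        return sum(CODES[a] for a in answers if CODES[a] is not None)

    print(total_score(["strongly agree", "somewhat disagree", "don't know"]))  # 1 + 4 = 5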

Semantic Differential. Respondents are asked to evaluate an object such as a company, nonprofit organization, or brand on a number of dimensions divided into segments numbered from 1 to 9 or 1 to 11. In contrast to Likert scales, positions are not labeled. Rather, the scales are anchored on each end with opposing (semantically different) adjectives or phrases—for example:
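
    Healthy   1   2   3   4   5   6   7   8   9   Unhealthy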

Respondents indicate where on each scale they perceive the object in question to be. Two problems are common with semantic differentials. First, there is again the confusion of whether the midpoint of the scale represents indifference or ignorance. Second, there is the problem that the anchors may not be true opposites; for example, is the opposite of healthy “unhealthy” or “sick”?

Stapel Scale. Some dimensions on which the researcher may wish to rate something may not have obvious opposites, for example, “fiery,” “cozy,” or “classic.” Stapel scales were designed for this contingency.
The interviewer asks the respondents to indicate the degree to which a particular adjective applies to an object in question. Usually Stapel scales are easier to explain over the telephone than semantic differentials and require little pretesting.
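
For example, respondents might be asked to choose a number from +5 (describes the object very accurately) down to -5 (describes it very inaccurately) for a single adjective such as "cozy":

    +5   +4   +3   +2   +1   Cozy   -1   -2   -3   -4   -5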

Graphic Scales. If the respondent can be shown a scale graphically, for example, in a mail, Internet, self-report, or face-to-face interview study, then a scale whose positions look equally spaced can be used. Researchers sometimes use a ladder to represent social class dimensions along which respondents are asked to place themselves.

The ladder can also be used on the telephone, as can the image of a thermometer, to give people unfamiliar with scales an idea of what they look like. Graphic scales are particularly useful for respondents with low literacy levels.

Threatening Questions

Studies may touch on issues that are threatening to some or all respondents, for example, topics like sex, alcohol consumption, mental illness, or family planning practices, all of which may be of interest to a marketer. These are touchy issues and hard to phrase in questions. Respondents usually do not wish to reveal to others something private or something they feel may be unusual. Even seemingly innocuous questions may be threatening to some respondents. For example, a man may not wish to reveal that the reason he gives blood regularly is that a nurse at the blood donation center is attractive. Or a housewife may not be eager to admit she likes to visit the city art gallery so she can get a shopping bag in the gift shop to impress her middle-class neighbors.

There are several approaches to reducing threat levels. One is to assure respondents at the start of the study that they can be as candid and objective as possible since the answers will be held in complete confidence. This point can then be repeated in the introduction to a specific threatening question.

A second approach, which tends to ease individuals’ fears of being unusual, is to preface the question with a reassuring phrase indicating that out-of-the-ordinary answers to it are common. Thus, one might begin a question about alcohol consumption as follows: “Now we would like to ask you questions about your alcohol consumption in the past week. Many have reported consuming alcohol at parties and at meals. Others have told us about unusual occasions on which they take a drink of whiskey, wine, or beer, like right after they get out of bed in the morning or just before an important meeting with a coworker they don’t like. Could you tell us about each of the occasions on which you had an alcoholic beverage in the past week, that is, since last [day of the week]?”

Another approach is to use an indirect technique. Respondents may often reveal the truth about themselves when they are caught off-guard, for example, if they think they are not talking about themselves. A questionnaire may ask respondents to talk about “a good friend” or “people in general.” In this case, the assumption is that in the absence of direct information about the behavior or attitudes of others, respondents will bring to bear their own perceptions and experiences.

Finally, the researcher could use so-called in-depth interviewing techniques (mentioned in Chapter Eight). Here, the interviewer tries not to ask intrusive questions. Rather, the topic (perhaps alcohol consumption) is introduced, and the respondent is kept talking by such interjections as “That’s interesting” or “Tell me more.” In the hands of a skilled, supportive interviewer, respondents should eventually dig deeply into their psyches and reveal truths that might otherwise be missed or hidden. However, such approaches are very time-consuming, can be used only with small (and probably unrepresentative) samples, and require expertise that is often unaffordable for low-budget researchers.

Constricting Questions

Respondents may withhold information or not yield enough detail if the questions do not permit it. They may also terminate the interview out of frustration. The questionnaire should almost always include an “other” option wherever there is a real chance that not every possible answer has been precoded. Multiple choices should be allowed where they are relevant, and people should be able to report that some combination of answers is truly the situation.
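
For example, a precoded question might be laid out along these lines (the activity categories are purely illustrative):

    Which of the following have you done in the past month? (Check all that apply.)
    __ Donated money    __ Donated blood    __ Volunteered time
    __ Other (please specify): ______________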

Generalization Biases

Bias can often creep into answers by respondents who are asked to generalize about something, particularly their own behavior. For example, neophyte questionnaire writers often ask respondents to indicate their favorite radio station, the weekly newsmagazine they read most often, or how often they exercise each month. The problem is that these questions require the respondents to summarize and make judgments about their own behavior, yet how they make these generalizations will be unknown to the researcher. For example, when asked for a favorite radio station, one person may report a station she listens to while in the car, another may report a favorite station at home, and a third may report one that pleases him most often rather than the one he listens to most frequently.

When asking questions about behavior, it is almost always better to ask about specific past behaviors than to have the respondent generalize. Rather than asking about a favorite radio station, a respondent can be asked, “Think back to the last time you had the radio on at home, at work, or in the car. What station were you listening to?” In this case, the respondent perceives the task as reporting a fact rather than coming up with a generalization. As a result, the interviewer is likely to get much more objective, error-free reporting than if consumers are asked to generalize.
