What is Decision Making

The strategic planning system we have reviewed in this chapter is an example of a rational decision-making model. In essence, strategic planning is a formal process for making important decisions about strategies, tactics, and operations. More generally, making decisions is a major component of a manager’s job. Strategic planning systems are a subset of what is often referred to as the classic rational model of decision making.
THE RATIONAL DECISION-MAKING MODEL

The rational decision-making model has a number of discrete steps. First, managers have to identify the problem to be solved by a decision. Problems often arise when there is a gap between the desired state and the current state. For example, if a firm is not attaining its goals for profitability and growth, the gap signifies a problem. Second, managers must identify decision criteria, which are the standards used to guide judgments about which course of action to pursue.

Imagine, for example, that a manager has to decide what model of car to purchase for a company fleet. The decision criteria might include cost, fuel efficiency, reliability, performance, and styling. Third, managers need to weight the criteria by their importance. The weighting should be driven by the overall goals of the organization. Thus for an organization that is trying to reduce costs, a manager choosing cars for a company fleet would probably weight fuel efficiency higher than styling or power.

Fourth, managers need to generate alternative courses of action. In the example used here, this would mean specifying the different models of car that fall into the feasible set. Fifth, managers need to compare the alternatives against the weighted criteria, and choose one alternative.

Sixth, they should implement that choice (for example, issue a purchase order to buy cars). Finally, after a suitable period they should always evaluate the outcome and decide whether the choice was a good one. If the outcome does not meet expectations, this constitutes a problem that triggers another round of decision making.
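To make steps two through five concrete, here is a minimal sketch in Python of the fleet-purchase example. The criteria weights, car models, and scores are hypothetical illustrations, not data from the text.

```python
# A minimal sketch of steps 2 through 5 of the rational model, applied to
# the fleet-purchase example. All weights, models, and 1-10 scores here are
# hypothetical illustrations.

# Step 3: weight the criteria by importance (a cost-conscious organization).
criteria_weights = {
    "cost": 0.35,
    "fuel_efficiency": 0.30,
    "reliability": 0.20,
    "performance": 0.10,
    "styling": 0.05,
}

# Step 4: generate alternatives, each scored 1-10 on every criterion.
alternatives = {
    "Model X": {"cost": 7, "fuel_efficiency": 9, "reliability": 8,
                "performance": 5, "styling": 6},
    "Model Y": {"cost": 5, "fuel_efficiency": 6, "reliability": 9,
                "performance": 8, "styling": 8},
}

def weighted_score(scores, weights):
    """Step 5: sum each criterion score multiplied by its importance weight."""
    return sum(weights[c] * scores[c] for c in weights)

# Choose the alternative with the highest weighted score.
best = max(alternatives, key=lambda a: weighted_score(alternatives[a], criteria_weights))
print(best)  # "Model X": cheap and fuel efficient beats stylish and powerful here
```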

BOUNDED RATIONALITY AND SATISFICING

The rational decision-making model is reasonable except for one problem: The implicit assumption that human decision makers are rational is not valid. This point was made forcefully by Nobel Prize winner Herbert Simon. According to Simon, human beings are not rational calculating machines. Our rationality is bounded by our own limited cognitive capabilities.

Bounded rationality refers to limits in our ability to formulate complex problems, to gather and process the information necessary for solving those problems, and thus to solve those problems in a rational way. Due to the constraints of bounded rationality, we tend not to optimize, as assumed by the rational decision-making model.

Rather, we satisfice, aiming for a satisfactory level of a particular performance variable, rather than its theoretical maximum. For example, instead of trying to maximize profits, the theory of bounded rationality argues that managers will try to attain a satisfactory level of profits.
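The contrast between optimizing and satisficing can be sketched in a few lines of code. The profit figures and the aspiration level below are hypothetical; the point is simply that a satisficer stops searching at the first alternative that is good enough, while an optimizer must evaluate every alternative.

```python
# A hypothetical contrast between optimizing and satisficing. The profit
# figures and the aspiration level are illustrative; evaluating each
# alternative is assumed to be costly.

options = [6.2, 7.8, 9.1, 8.4, 9.6]  # expected profit of each alternative

def optimize(options):
    # The rational model: evaluate every alternative, take the maximum.
    return max(options)

def satisfice(options, aspiration_level=7.5):
    # Bounded rationality: stop at the first "good enough" alternative.
    for value in options:
        if value >= aspiration_level:
            return value
    return max(options)  # fall back if nothing meets the aspiration level

print(optimize(options))   # 9.6, but only after evaluating all five options
print(satisfice(options))  # 7.8, found after evaluating just two
```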

Satisficing (settling for a good enough solution to a problem) occurs not only because of bounded rationality, but also because of the prohibitive costs of collecting all the information required to identify the optimal solution to a problem—and often because some of the required information is unavailable. For example, identifying the optimal strategy for gaining market share from competitors may require information about consumer preferences; consumer responses to changes in key product variables such as price, quality, and styling; the cost structure, current and future product offerings, and strategy of rivals; and future demand conditions.

Much of this information is costly to gather (data about consumer preferences and responses), private (the cost structure and future product offerings of rivals), and unpredictable (future demand conditions), so managers tend to collect a limited amount of publicly available information and make satisficing decisions based on that.

DECISION-MAKING HEURISTICS AND COGNITIVE BIASES

Cognitive psychologists argue that when making decisions, due to bounded rationality we tend to fall back on decision heuristics, or simple rules of thumb. Decision heuristics can be useful, because they help us make sense out of complex and uncertain situations. An example of a decision-making heuristic is the so-called 80–20 rule, which states that 80 percent of the consequences of a phenomenon stem from 20 percent of the causes.

A common formulation of the 80–20 rule states that 80 percent of a firm’s sales are derived from 20 percent of its products, or that 20 percent of the customers account for 80 percent of sales. Another common formulation often voiced in software companies is that 20 percent of the software programmers produce 80 percent of the code. It is also claimed that 20 percent of criminals produce 80 percent of all crimes, 20 percent of motorists are responsible for 80 percent of accidents, and so on.

Managers often use the 80–20 rule to make resource allocation decisions, for example, by focusing sales and service efforts on the 20 percent of customers who are responsible for 80 percent of revenues. Although the 80–20 rule might be verified through empirical measurement, often it is not. People just assume it is true—and therein lies the problem: The rule does not always hold. The assumption may be invalid, and decisions made on the basis of this heuristic might be flawed.
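As the passage notes, the rule can be checked rather than assumed. The short sketch below, using hypothetical per-customer revenue figures, computes the share of revenue actually generated by the top 20 percent of customers; in this illustrative data set the figure is about 75 percent, not 80.

```python
# Checking the 80-20 assumption against data instead of assuming it.
# The per-customer revenue figures are hypothetical.

revenues = [500, 320, 90, 60, 45, 30, 25, 15, 10, 5]

def share_from_top(revenues, top_fraction=0.20):
    """Share of total revenue generated by the top fraction of customers."""
    ordered = sorted(revenues, reverse=True)
    top_n = max(1, round(len(ordered) * top_fraction))
    return sum(ordered[:top_n]) / sum(ordered)

# Here the top 20% of customers generate about 75% of revenue, not 80%:
print(f"{share_from_top(revenues):.0%} of revenue from the top 20% of customers")
```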

Generalizing from this, cognitive psychologists say that as useful as heuristics might be, their application can cause severe and systematic errors in the decision-making process. Cognitive biases are decision-making errors that we are all prone to making and that have been repeatedly verified in laboratory settings or controlled experiments with human decision makers. Due to the operation of these biases, managers with good information may still make bad decisions.

Prior Hypothesis Bias Example

A common cognitive bias is known as the prior hypothesis bias, which refers to the fact that decision makers who have strong prior beliefs about the relationship between two variables tend to make decisions on the basis of these beliefs, even when presented with evidence that their beliefs are wrong.

Moreover, they tend to seek and use information that is consistent with their prior beliefs while ignoring information that contradicts these beliefs. To put this bias in a strategic context, it suggests that a CEO who thinks a certain strategy makes sense might continue to pursue that strategy, despite evidence that it is inappropriate or failing.

Another well-known cognitive bias, escalating commitment, occurs when decision makers, having already committed significant resources to a project, commit even more resources if they receive feedback that the project is failing. This may be an irrational response; a more logical response might be to abandon the project and move on.

Feelings of personal responsibility for a project, along with a desire to recoup their losses, can induce decision makers to stick with a project despite evidence that it is failing. A third bias, reasoning by analogy, involves the use of simple analogies to make sense out of complex problems. The problem with this heuristic is that the analogy may not be valid. A fourth bias, representativeness, is rooted in the tendency to generalize from a small sample or even a single vivid anecdote.

This bias violates the statistical law of large numbers, which says that it is inappropriate to generalize from a small sample, let alone from a single case. In many respects the dot-com boom of the late 1990s was based on reasoning by analogy and representativeness. Prospective entrepreneurs saw some of the early dot-com companies such as Amazon and Yahoo! achieve rapid success, at least judged by some metrics.

Reasoning by analogy from a small sample, they assumed that any dot-com could achieve similar success. Many investors reached similar conclusions. The result was a massive wave of start-ups that jumped onto the Internet in an attempt to capitalize on the perceived opportunities. That the vast majority of these companies subsequently went bankrupt is testament to the fact that the analogy was wrong, and the success of the small sample of early entrants was no guarantee that other dot-coms would succeed.
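The underlying statistical point is easy to simulate. The sketch below assumes a hypothetical 10 percent true success rate for start-ups; a handful of observed cases can suggest a very different rate, while a large sample converges toward the truth.

```python
# Simulating why generalizing from a small sample misleads. Assume,
# hypothetically, that each start-up succeeds independently with a true
# probability of 10 percent.

import random

random.seed(42)  # fixed seed so the sketch is reproducible
TRUE_SUCCESS_RATE = 0.10

def observed_rate(sample_size):
    """Fraction of successes seen in a random sample of start-ups."""
    successes = sum(random.random() < TRUE_SUCCESS_RATE for _ in range(sample_size))
    return successes / sample_size

print(observed_rate(5))      # a few vivid cases can suggest almost any rate
print(observed_rate(10000))  # a large sample converges toward the true 10%
```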

Another cognitive bias, known as the illusion of control, is the tendency to overestimate one’s ability to control events. General or top managers seem to be particularly prone to this bias: Having risen to the top of an organization, they tend to be overconfident about their ability to succeed. According to Richard Roll, such overconfidence leads to what he has termed the hubris hypothesis of takeovers. Roll asserts that top managers are typically overconfident about their abilities to create value by acquiring another company.

So, they end up making poor acquisition decisions, often paying far too much for the companies they acquire. Servicing the debt taken on to finance such an acquisition makes it all but impossible to make money from the acquisition (the acquisition of Time Warner by AOL, discussed earlier, is a good example of management hubris).

The availability error is yet another common bias. The availability error arises from our predisposition to estimate the probability of an outcome based on how easy the outcome is to imagine. For example, more people seem to fear a plane crash than a car accident, and yet statistically people are far more likely to be killed in a car on the way to the airport than in a plane crash.

They overweight the probability of a plane crash because the outcome is easier to imagine and because plane crashes are more vivid events than car crashes, which affect only small numbers of people at a time. As a result of the availability error, managers might allocate resources to a project with an easily visualized outcome rather than to one that might offer a higher return.

Finally, the way a problem or decision is framed can result in the framing bias. In a classic illustration of framing bias, Tversky and Kahneman give the example of what they call the Asian disease problem. They asked participants in an experiment to imagine that the United States is preparing for the outbreak of an unusual disease from Asia that is expected to kill 600 people. Two programs to combat the disease have been developed. One group of participants was told that the consequences of the programs were as follows:

  • Program A: 200 people will be saved.
  • Program B: There is a one-third probability that 600 people will be saved and a two-thirds probability that no one will be saved.

When the consequences were presented this way, 72 percent of participants preferred program A. A second group of participants was given the following choice:

  • Program C: 400 people will die.
  • Program D: There is a one-third probability that no one will die and a two-thirds probability that 600 people will die.

When the consequences were presented this way, 78 percent of the participants preferred program D. However, programs A and C are the same, as are programs B and D! The point, of course, is that the preferences were shaped by how the problems were framed.
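A quick calculation confirms the equivalence. Using the numbers from the problem above, every program implies the same expected number of lives saved; only the framing differs.

```python
# Expected lives saved under each program in the Asian disease problem.
# Programs A and C are identical, as are B and D; only the framing differs.

TOTAL = 600

program_a = 200                                  # 200 saved for certain
program_b = (1/3) * 600 + (2/3) * 0              # expected number saved
program_c = TOTAL - 400                          # 400 die, so 200 are saved
program_d = (1/3) * (TOTAL - 0) + (2/3) * (TOTAL - 600)  # expected saved

print(program_a, program_b, program_c, program_d)  # 200 200.0 200 200.0
```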

The wrong frames can have significant negative implications for a company. A good example concerns Encyclopedia Britannica, which thought it was in the book business until it found out it was really in the knowledge and information business, which had gone digital. The company’s sales reportedly peaked at around $620 million in 1989 and then fell off sharply as CD-ROM and then Internet-based digital encyclopedias, such as Encarta, took away market share.

Today, after a close brush with bankruptcy, Encyclopedia Britannica survives as a Web-based business, but it attracts far less traffic than Wikipedia, the dominant online encyclopedia.

PROSPECT THEORY

Prospect theory, which was developed by psychologists Daniel Kahneman and Amos Tversky, is a widely cited model of how the cognitive biases arising from simple heuristics can influence managerial decision making. Prospect theory has been used to explain the observation that people seem to make decisions that are inconsistent with the rational model. Prospect theory suggests that individuals assign different subjective values to losses and gains of equal magnitude that result from a decision.

According to this theory, when evaluating the potential gains and losses associated with a course of action, people start by establishing a reference point or anchor. The reference point is usually the current situation. Thus if a firm is currently making a return on invested capital of 10 percent, this might be the reference point for a decision that affects this measure of profitability.

However, as just noted when we discussed the framing bias, the reference point can be influenced by how a problem or decision is framed. Prospect theory predicts that decision makers will subjectively overweight the value of potential losses and underweight the value of potential gains relative to their objective, or monetary, value. Put differently, decision makers are loss averse—they avoid actions that have a potential negative outcome.
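Loss aversion is often formalized with Kahneman and Tversky's value function, sketched below. The parameter values used here are commonly cited estimates from their later work; the essential point is only that the value curve is steeper for losses than for gains of equal size.

```python
# A common formalization of prospect theory's value function. The parameters
# (alpha = beta = 0.88, lam = 2.25) are commonly cited estimates; the point
# is that the curve is steeper for losses than for gains of equal size.

def subjective_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a gain or loss x relative to the reference point."""
    if x >= 0:
        return x ** alpha           # gains: concave, underweighted
    return -lam * ((-x) ** beta)    # losses: steeper, overweighted

print(subjective_value(100))   # about 57.5: the felt value of a 100 gain
print(subjective_value(-100))  # about -129.5: a loss of 100 hurts much more
```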

An interesting implication of prospect theory is that if decision makers have incurred significant losses in the past, they become distressed (they assign a subjectively high negative value to those losses); this shifts their reference point, and they tend to make riskier decisions than would otherwise have been the case. In other words, loss-averse decision makers try to recoup losses by taking bigger risks—paradoxically, they become risk seekers.

This explains a well-documented tendency for gamblers who are losing to place progressively riskier bets. Similarly, investors in the stock market who have lost significant money have been observed trying to recoup their losses by investing in more speculative stocks. For a managerial example, look no further than Enron, the now-bankrupt energy trading company, where the response to mounting losses was to pursue the increasingly risky strategy of hiding those losses in off-balance-sheet entities and engaging in illegal trades to inflate profits.

Had the reference point for Enron been more positive, it seems unlikely that the managers would have taken these risks. Note that prospect theory also explains the phenomenon of escalating commitment we discussed earlier.

GROUPTHINK

Because most decisions are made by groups, the group context within which decisions are made is an important variable in determining whether cognitive biases will adversely affect the strategic decision-making processes. Psychologist Irving Janis asserts that many groups are characterized by a process known as groupthink and as a result make poor strategic decisions.

Groupthink occurs when a group of decision makers embarks on a course of action without questioning underlying assumptions. Typically a group coalesces around a person or policy. It ignores or filters out information that can be used to question the policy, develops after-the-fact rationalizations for its decisions, and pushes out of the group members who question the policy. Commitment to mission or goals becomes based on an emotional rather than an objective assessment of the “correct” course of action. The consequence can be poor decisions.

It has been said that groupthink may help to explain why organizations often make poor decisions in spite of sophisticated planning processes. Janis traces many historical fiascoes to defective policy making by government leaders who received social support from their in-group of advisers.

For example, he suggests that President John F. Kennedy’s inner circle suffered from groupthink when the members of this group supported the decision to launch the Bay of Pigs invasion of Cuba in 1961, even though available information showed that it would be an unsuccessful venture (which it was).

Similarly, Janis argues that the decision by the Johnson administration to escalate the commitment of military forces to Vietnam and increase the bombing of North Vietnam, despite the availability of data showing that this probably would not help win the war, was the result of groupthink.

Indeed, when a member of the in-group of decision makers, Defense Secretary Robert McNamara, started to express doubts about this policy, he was reportedly asked to leave by the president and resigned. However, despite the emotional appeal of such anecdotes, academic researchers have not found strong evidence in support of groupthink.

IMPROVING DECISION MAKING

The existence of bounded rationality, cognitive biases, and groupthink raises the issue of how to bring critical information into the decision mechanism so that the decisions of managers are more realistic, objective, and based on thorough evaluation of the available data. Scenario planning can be a useful technique for counteracting cognitive biases: The approach forces managers to think through the implications of different assumptions about the future.

As such, it can be an antidote to hubris and the prior hypothesis bias. Two other techniques known to counteract groupthink and cognitive biases are devil’s advocacy and dialectic inquiry. Devil’s advocacy requires the generation of both a plan and a critical analysis of the plan. One member of the decision-making group acts as the devil’s advocate. The purpose of the devil’s advocate is to question assumptions underlying a decision and to highlight all the reasons that might make the proposal unacceptable.

In this way decision makers can be made aware of the possible perils of recommended courses of action. Dialectic inquiry is more complex: It requires the generation of a plan (a thesis) and a counterplan (an antithesis) that reflect plausible but conflicting courses of action. Managers listen to a debate between advocates of the plan and counterplan and then decide which plan will lead to higher performance.

The purpose of the debate is to reveal problems with definitions, recommended courses of action, and assumptions of both plans. As a result of this exercise, managers can form a new and more encompassing conceptualization of the problem, which becomes the final plan (a synthesis). Dialectic inquiry can promote thinking strategically.

Another technique for countering cognitive biases championed by Daniel Kahneman (of prospect theory fame) is known as the outside view. The outside view requires planners to identify a reference class of analogous past strategic initiatives, determine whether those initiatives succeeded or failed, and evaluate the project at hand against those prior initiatives. According to Kahneman, this technique is particularly useful for countering biases such as the illusion of control (hubris), reasoning by analogy, and representativeness.

Thus, for example, when considering a potential acquisition, planners should look at the track record of acquisitions made by other enterprises (the reference class), determine whether they succeeded or failed, and objectively evaluate the potential acquisition against that reference class. Kahneman asserts that such a “reality check” against a large sample of prior events tends to constrain the inherent optimism of planners and produce more realistic assessments and plans.
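A minimal sketch of this reality check: compare the planners' own (inside view) estimate with the base rate of success in a reference class of comparable past acquisitions. All of the numbers below are hypothetical.

```python
# A sketch of Kahneman's "outside view": compare the planners' own estimate
# with the base rate of success in a reference class of past acquisitions.
# All figures here are hypothetical.

reference_class = {"succeeded": 23, "failed": 77}  # comparable past acquisitions

def base_rate(outcomes):
    """Historical success rate of the reference class."""
    return outcomes["succeeded"] / sum(outcomes.values())

inside_view_estimate = 0.80  # planners' own, typically optimistic, estimate
outside_view_estimate = base_rate(reference_class)

print(f"Inside view:  {inside_view_estimate:.0%} chance of success")
print(f"Outside view: {outside_view_estimate:.0%} chance of success")
```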

Finally, decision makers are more likely to run into problems of bounded rationality, and to resort to simple decision-making heuristics, when they have too much information to process. A solution to this problem is to reduce the amount of information that managers have to process, giving them more time to focus on critical issues, by delegating routine decision-making responsibilities to subordinates. We return to this issue in a later chapter, when we discuss internal organization structure.
