
Chapter 3

Decision-Making Under Risk: A Normative and Behavioral Perspective

Daniel Straub and Isabell Welpe

This chapter introduces the theories of decision-making under uncertainty and risk
of socio-technical systems. Following the historic development of the main concep-
tions of rationality, we start with expected utility theories and explain the rational
choice (or normative) perspective. We explain how decisions under risk can be opti-
mized consistently within the framework of the theory, and under which conditions
such analyses are particularly applicable and when they are reduced to an economic
cost-benefit analysis. It is then discussed why the classic theories are sometimes
misused and why the normative perspective is not suitable to describe or predict
actual human behavior, perception or evaluation of decisions and their outcomes
under uncertainty and risk. We then outline alternative theories of decision-making,
including descriptive approaches from behavioral economics (e.g. cognitive biases)
as well as ecological rationality and heuristic decision making. As is discussed in
this article, the normative approach is suited for optimizing decisions in a consistent
manner for relatively well defined (often technical) problems, whereas the alterna-
tive theories are more suitable to predict actual human and social evaluations and
behavior and can provide improved decision making in complex situations where
socio-technical system parameters as well as the decision maker’s preferences are
not well defined.

Keywords Expected utility · Risk · Decision theory · Behavioral economics

Mathematics Subject Classification (2010) 62C05 · 62C10 · 91B06 · 62P30 · 62P20

D. Straub
Engineering Risk Analysis Group, Faculty of Civil, Geo and Environmental Engineering,
Technische Universität München, Theresienstr. 90, 80333 Munich, Germany

I. Welpe (B)
Chair for Strategy and Organization, TUM School of Management, Technische Universität
München, Arcisstr. 21, 80798 Munich, Germany
e-mail: [email protected]


The Facts
• In theories of judgment and decision making one has to distinguish between how
people should make decisions (idealistic, normative approaches) and how people
actually make decisions (realistic, descriptive approaches).
• Normative decision theory assumes that under certain circumstances decision
makers (should) follow a certain set of rules that ensures consistency among deci-
sions as well as optimal decision outcomes. Descriptive decision theory accounts
for the fact that people do not follow these rules and for situations in which an
optimal set of rules cannot be given.
• Normative decision theory is applicable to well defined and contained (often tech-
nical) problems, and can be used to optimize risk levels. A number of tools, in-
cluding decision trees and graphs, exist. It can also be used to optimize the amount
of information that should be collected to reduce uncertainty before making the
decision.
• The utility function describes the decision maker’s preferences. It is an empirical
function that can differ between individuals and is influenced by subjective per-
ceptions. No mathematical form of the utility function is justified by some “uni-
versal law”.
• Different from what the classical normative theory would propose, the subjec-
tive, observer-dependent perception of “objective” values and probabilities has a
strong impact on human perceptions, evaluations and decisions. The normative
theory therefore generally fails to accurately recognize, describe or predict actual
decision making under risk and uncertainty.
• When optimization is not possible, people often make good decisions through the
use of heuristics and “gut feelings”.
• Most risks are embedded into socio-technical systems, thus it is advisable to be
familiar with and use both normative and descriptive risk decision theories.
• There is no “fixed formula” for ideal decision making under risk and uncertainty.

1 Introduction
Decision making under conditions of uncertainty and risk is an every-day task.
When deciding whether or not to take the umbrella upon leaving the house, when
deciding on whether or not to wear a helmet for bicycling or when deciding whether
to take the train or the airplane, you are making a decision that involves outcomes
that are uncertain (Will it rain? Will you be hit by a car? Will the train or the plane
be safer?) and that are associated with risks (of catching a cold; of sustaining in-
juries). In our every-day life, we often use intuition (also called heuristics or gut
feeling—see Sect. 3) to make such decisions, which often works well. As profes-
sionals dealing with risk and uncertainty we often have to make complex and far-
reaching decisions or advise the ones that make those decisions, e.g. a committee
of experts in health risk that must make a recommendation on acceptable levels of
air pollution, a team of engineers that must determine the optimal flood protection

strategy for a city or a team of corporate managers that must weigh the economic
risks against the technical risks in the introduction of new products and technolo-
gies. Even as individuals we frequently must decide between decision alternatives
involving uncertainty on which we have little experience and intuition, for example
as a patient choosing between different treatment options, as a saver choosing
between different investment strategies for retirement, or in private life when deciding for or against a life
partner. Decision theory has been developed to describe and model the process of
making such decisions and ideally supports us in identifying the best options.
Decision theory started out by assuming that the outcomes of decisions can
be assessed following a set of consistent decision rules (often—and somewhat
misleadingly—referred to as “rational decision making”). Based on these rules, it
is then possible to mathematically identify optimal decisions under conditions of
uncertainty. Today, this theory is called the normative decision theory, because it is
useful in describing how decisions should ideally be made under some idealistic,
objective and observer-independent assumptions (compare Sect. 3.2), which will be
discussed in this article. When studying the behavior of decision makers, it is ob-
served that people’s assumptions and resulting actions are not consistent with the
assumptions and rules of the normative decision theory. Instead, decisions made by
people are influenced by a number of cognitive, motivational, affective and other
factors that are not addressed by the classical normative theory. Decisions
associated with risk and uncertainty are often concerned with socio-technical sys-
tems of some sort, in which human, social and technical dimensions continuously
interact. In order to understand, model and reduce risk in these anthropogenic sys-
tems, it is necessary to understand how people involved in the process actually per-
ceive, evaluate and decide about risk, which is the aim of descriptive decision theory
that concerns itself with the empirical reality of how people think and decide.
Examples for the application of the normative theory in risk management include
the optimization of decisions on the optimal level of flood protection for a city based
on probabilistic models of future flood events and infrastructure performance, or
decisions on optimal levels of insurance and reinsurance coverage. Examples for
the application of the descriptive theory arise when dealing with processes whose
outcomes substantially depend on the perceptions, evaluations, decisions and inter-
ventions of humans. For example, consumers decide if genetically modified food is
safe for them to buy and eat, or if nuclear energy is an acceptable form of energy
technology.
As described in the above paragraphs, in this chapter we distinguish between the
normative and the descriptive decision theory. Normative decision analysis uses a
mathematical modeling approach based on the expected utility theory (sometimes
also called normative, prescriptive, rational or economic decision analysis) and
provides a framework for analyzing the optimality of decisions when knowledge
of the probability and consequences involved in the decision is available or can
be approximated. Descriptive or behavioral decision analysis supports risk-related
decisions in complex, socio-technical systems that involve uncertainties with regard
to probability and outcomes that make exact quantification difficult. Using either
normative or descriptive decision theory in isolation gives an incomplete assessment

of the realities of the risk situation. Risk management in socio-technical systems and
situations should always consider both normative and descriptive aspects of decision
analysis. Risk managers and decision makers thus need to be familiar with different
risk theories and perceptions.
Section 2 of this chapter presents an introduction to the normative theory while
Sect. 3 introduces the descriptive theory. Finally, Sect. 4 concludes with a compari-
son of the main theories with regard to their assumptions, approach, decision criteria
and applicability.

2 Normative Decision Making: Optimal Decision Making Based on the Expected Utility Criterion

2.1 Mathematical, Technical and Economic Perspective: The Rational Approach

In many professional situations it is desirable to select the right decision follow-
ing a set of logical and reproducible rules and criteria.1 This holds true in particu-
lar when making decisions in groups, where different verbal arguments have to be
“translated” into numbers and outcomes, when probabilities and outcomes can be
sufficiently quantified, and when decisions affect others, as is the case in risk man-
agement of anthropogenic systems (e.g. technical systems, environmental systems
or companies). When authorities prescribe an acceptable level of air pollution, so-
ciety expects that the decision on the value of this level is made on a rational and
consistent basis (i.e. that the decisions are perceived as legitimate), taking into ac-
count all costs and benefits; on the one hand the potential health and environmental
effects and on the other hand the economic costs and benefits of setting stringent
criteria. A main difficulty in making such decisions is that many of the influencing
factors and future outcomes are not and cannot be known with certainty. Neither the
health impact of the pollutants nor the cost of reducing them or the value derived
thereof for people can be precisely quantified.
To identify optimal decisions in situations when outcomes are uncertain is the
goal of classical decision analysis, which has its foundation as a scientific discipline
in the publication of the book by Von Neumann and Morgenstern [49] on utility.
It is worthwhile noting that although their work is entitled “Theory of Games and
Economic Behavior”, it was written by mathematicians and not by empirical scien-
tists. Classical decision analysis is based on the premise that outcomes are uncertain

1 We note that at least two reasons for this preference can be distinguished: (1) Rules and numbers
allow for an “objective” and “true” assessment of risks, probabilities and outcomes. (2) In social
interactions, the legitimacy and acceptability of decisions is increased by justifying them through
the use of (sometimes just seemingly) objective and true assessment of risks, probabilities and
outcomes.

but that it is possible to quantify their probabilities of occurrence. It furthermore as-
sumes that the preferences of the decision makers follow certain rules that are con-
sidered rational, as described by utility theory introduced in Sect. 2.3. According to
these rules, decisions should not be influenced by any factors that are considered
irrelevant for the outcome, in particular not by the context in which gains or losses
occur. Despite (or even because of) these idealized assumptions, classical deci-
sion theory provides a useful framework for analyzing decisions involving risk,
for quantifying outcomes and probabilities, and for describing how decisions should
be made in an ideal world. This theory makes it possible to set up consistent (i.e.
reproducible and comparable) criteria for making decisions, which is often relevant
when decisions need to be justified in social contexts and affect a larger group, as is
commonly the case in a socioeconomic or technical context.
In short, the classical decision theory provides a rationale for identifying the
decisions and actions that should be taken under conditions of uncertainty and risk.
For this reason it is often termed the normative or prescriptive approach. Because
it also forms the basis for classical economic theory, it is also often referred to
as economic decision theory. Hereafter, we will generally use the term normative
decision theory.

2.2 System Model, Decisions and Utility

Normative decision analysis requires a model of the relevant system and time frame,
the identification of possible decision alternatives and the probabilities and out-
comes as well as a measure for evaluating the optimality of the decision alternatives.
For engineering problems, the relevant system is typically represented by physical,
chemical and/or logical models with input and output variables, some of which are
uncertain. In deference to the literature on decision analysis, we will represent the
system by a vector of random variables Θ. Often, Θ is referred to as “state of
nature”. As an example, consider the problem of determining the optimal flood pro-
tection for a city. Here, Θ might represent the future maximum water height and
discharge of the river, as well as the future land use in the areas at risk.
The decision alternatives can be separated into decisions on actions and decisions
on gathering further information. The former, which we will denote by a, actively
change the state of the system as represented by Θ. As an example, the decision on
building a dam upstream will change the probability of a flooding of the city or the
decision on allowing no building close to the river will alter the damage in the case
of a flood. On the other hand, decisions on gathering further information, denoted
by e, will not change the state of the system. Upon obtaining the information, our
estimate of the system state may change, however. If, for example, one decides to
perform an extended hydrological study, one will reduce the uncertainty on the es-
timate of the intensity of future flood events and obtain a more accurate estimate of
maximum floods. In the following we will focus on decisions on actions a; deci-
sions on collecting information e are considered in preposterior decision analysis
as introduced in Sect. 2.6.
68 D. Straub and I. Welpe

Finally, we must identify the attributes of the system upon which to assess the
optimality of a decision alternative. In the decision on flood protection, these at-
tributes include safety, monetary cost of measures and damages as well as societal
and environmental consequences. For optimization purposes, we translate these at-
tributes into a unique metric that allows comparing the alternatives in a quantitative
manner. This metric is termed utility u and the associated utility theory, outlined in
Sect. 2.3, forms the basis of normative decision analysis.

2.3 Utility Theory

The quality of an outcome of a set of decisions on an anthropogenic system is judged
on the basis of a number of attributes. As an example, in a decision analysis on the
management of contaminated sediments, the following attributes were identified
(Kiker et al. [25]):
– monetary cost;
– size of the affected area;
– impact on human health (safety);
– impact on ecological health.
In finding an optimal decision, all attributes must be taken into account. Typi-
cally, a situation arises where one decision alternative is better with respect
to one attribute while another decision alternative is better with respect to
another attribute. Cost and safety are common attributes in risk-related problems,
and in general a trade-off between the two must be made. If safety was the only at-
tribute, then a system should be designed as safe as possible (consider the pyramids
as an example of such a safe structural system). However, it is the art of engineering
to design structures that are not only safe but also economical (as well as functional
and aesthetical).
The motivation for utility theory is the need for a formalism that allows assess-
ing the optimality of decision alternatives such that the preferences of the decision
maker are consistently reflected. Such a formalism enables us to extrapolate from
past behavior to new decision situations, both with respect to the trade-off between
different attributes and the trade-off among different values of the same attribute. To
this end, we define a single metric for measuring the optimality of a decision. This
metric is called utility. Then, all attributes are transformed into utility by a suitable
transformation that consistently reflects the preferences of the decision maker. It is
assumed that this transformation, i.e. the weighing assigned to different attributes,
is constant with time. To introduce the concept, we study the transformation of the
attribute money into utility in the following.
First, we note that the utility function, which transforms attributes into utility,
is a property of the decision maker. Different decision makers will have different
utility functions. In Fig. 1, an exemplary utility function for an individual is shown.
This utility function is continuously increasing, which appears logical, since almost

Fig. 1 Utility function for an individual decision maker, transforming monetary values into utility

everybody would prefer more over less money. However, this is not a necessary
condition for the theory; in principle, the utility function can have any arbitrary
shape.
Second, we note that the utility is not linear with money over the entire domain.
The increase in utility associated with a small increase in wealth, i.e. du(w)/dw, is
called marginal utility. Most decision makers have a marginal utility that decreases
with increasing wealth w. (In economics, this is sometimes referred to as the law of
diminishing marginal utility.) In simple words: obtaining two million Euros is not
simply two times more preferable than obtaining one million Euros.
To understand how the exact form of the utility function is derived, we con-
sider the basic principle of utility theory developed by Von Neumann and Morgen-
stern [49]. This principle is that:2
Utility is assigned to the attributes in such a way that a decision (on which action to take)
is preferred over another if, and only if, the expected utility of the former is larger than the
expected utility of the latter.

That is, the utility function is derived to ensure that among a set of deci-
sion alternatives, the preferred one will always result in the higher expected utility,
E[U ]. Expectation is a mathematical operation, which for the case that the utility
depends only on the single random variable θ , is defined as
$$E[U] = \int_{-\infty}^{\infty} u(\theta) f(\theta)\, \mathrm{d}\theta \quad \text{or} \quad E[U] = \sum_{\text{all}\ \theta} u(\theta) p(\theta) \qquad (1)$$

where u(θ) is the utility as a function of the system state θ and f(θ) is the probabil-
ity density function (PDF) of Θ if it is continuous and p(θ) is the probability mass
function (PMF) of Θ if it is discrete.
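The expectation operation of Eq. (1) is easily evaluated numerically. The following minimal Python sketch illustrates both cases; the three-state PMF and the utility functions used here are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

# Discrete case of Eq. (1): E[U] is the probability-weighted sum of utilities.
# Hypothetical example with three system states:
p = np.array([0.5, 0.3, 0.2])          # PMF p(theta); must sum to 1
u = np.array([1.0, 0.0, -2.0])         # utility u(theta) of each state
E_U_discrete = np.sum(u * p)           # 0.5*1.0 + 0.3*0.0 + 0.2*(-2.0) = 0.1

# Continuous case: the integral of u(theta)*f(theta) is approximated on a
# grid, here for a standard normal state variable Theta.
theta = np.linspace(-6.0, 6.0, 10001)
f = norm.pdf(theta)                    # PDF f(theta)
u_of_theta = 1.0 - np.exp(-0.5 * theta)
E_U_continuous = np.sum(u_of_theta * f) * (theta[1] - theta[0])
```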
A common way of determining the utility function u(θ ) for monetary values is
to consider a series of decisions on whether or not to accept a bet. In each bet, there

2 For this to hold, a number of consistency requirements must be fulfilled, i.e. the preferences of
the decision maker must fulfill a set of axioms, which, however, are in agreement with what is
commonly considered to be consistent behaviour. As an example, one of the axioms states that the
ordering of the preferences among different outcome events Ei is transitive. Formally, if ≻ means
“preferred to”, then transitivity demands that if Ej ≻ Ek and Ek ≻ El, then it must also be Ej ≻ El.
For a more formal introduction and the full set of necessary axioms, consult e.g. (Luce and Raiffa
[5], Sect. 2.5).

is a probability of p to win a monetary prize of x1 and a probability of (1 − p) to
lose x0. For this bet, the expected utilities of the two decision alternatives are as
follows:
Decision not to bet, a0 : E[U | a0 ] = u(0),
Decision to bet, a1 : E[U | a1 ] = (1 − p)u(−x0 ) + p · u(x1 ).
E[U | a0 ] stands for: the expected value of U for given decision a0 . If, for particular
values of x0 , x1 and p, the decision maker prefers the decision a0 over the deci-
sion a1, it must hold that u(0) > (1 − p)u(−x0) + p · u(x1). If she prefers a1 over
a0 the opposite must hold, and if she is indifferent it is u(0) = (1 − p)u(−x0) +
p · u(x1 ). By varying the values of x0 , x1 and p, it is now possible to determine
the value of the utility function for different monetary values, so that it is consistent
with the actual decisions made by the decision maker. You may try to establish your
own utility function by playing such an imaginary game.
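A minimal sketch of one common variant of this game: two reference points of the utility function are fixed (which is allowed, see the note below), and each stated indifference probability then directly yields one point of the utility function. All amounts and indifference probabilities here are hypothetical.

```python
# Fix u at two reference amounts; any positive linear transformation is
# allowed (see the note below), so u(x_min) = 0 and u(x_max) = 1 can be
# chosen freely. For an intermediate amount x, the decision maker states
# the probability p_star at which she is indifferent between receiving x
# for certain and a bet paying x_max with probability p_star and x_min
# otherwise. Indifference means
#   u(x) = p_star * u(x_max) + (1 - p_star) * u(x_min) = p_star.
x_min, x_max = -1_000_000, 1_000_000     # reference amounts in EUR

stated_indifference = {                  # amount x -> stated p_star
    -500_000: 0.35,                      # (hypothetical answers)
    0: 0.70,
    500_000: 0.88,
}

utility_points = {x_min: 0.0, x_max: 1.0}
utility_points.update(stated_indifference)   # each p_star is one point of u
# u(0) = 0.70 > 0.5 indicates a concave utility function, i.e. risk aversion.
```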
(We note that a linear transformation of the utility function does not alter the
ordering of preferences, i.e. with u1(X) = c + b · u(X), where c and b > 0 are constants,
if E[u(X) | a0 ] > E[u(X) | a1 ] it must also hold that E[u1 (X) | a0 ] > E[u1 (X) | a1 ].
For this reason, any such positive linear transformation of the utility function is allowed, which
implies that two points of the utility function can be freely selected.)

2.3.1 Probability

Decision making based on the expected utility theory requires one to assess the
probability of all relevant system outcomes. In practice, these probabilities must
often be estimated by the decision maker on the basis of limited or no data. The
probabilities represent the knowledge of the decision maker at the time of making
the decision, and are therefore subjective values. The problem of assessing these
probabilities in real situations is further addressed in Sect. 3.1 and in Chap. 12, [42].

2.3.2 Risk

In the context of utility theory and normative decision analysis, we will use the
following definition of risk:
Risk is the expected change in utility associated with uncertain, undesirable outcomes.

Following utility theory, decisions are not made based on risk, but on the basis
of the expected utility (of which risk is a part). The optimal decision is the one that
leads to the highest expected utility. It follows that the risk that should optimally be
taken is the risk associated with this decision.

2.3.3 Risk-Aversion

Utility functions are often concave, like the one of Fig. 1, corresponding to dimin-
ishing marginal utility. When considering losses, this can be explained by the fact

Fig. 2 The utility function for a small engineering consultancy versus the utility function for a large insurance company

that substantial losses can have consequences that go beyond the direct losses, and
which therefore cannot be compensated by gains elsewhere. As an example, for a
company the loss of 10,000€ is likely to be twice as bad as the loss of 5,000€, but
the loss of 2 Million € can be disproportionally worse than the loss of 1 Million €
if such a loss threatens the liquidity of the company.
Typically, the utility function is linear (or almost linear) within a range that is
small compared to the working capital of the decision maker. This “size effect” is
illustrated in Fig. 2, showing the difference in the utility function of a small ver-
sus a large company. In the considered range, the utility function is linear for the
large company (these sums are “peanuts” for the insurance company), whereas it is
concave for the small company where the loss of one million is a critical event.
A consequence of the concave shape of the utility function is that decision
makers tend to avoid risks. Consider an event A, causing a loss of 10⁵ €, and
an event B, with associated loss 10⁶ €. Assume that the probabilities of these
events are pA = 0.1 and pB = 0.01. The expected monetary loss of both events is
p · Loss = −10⁴ €. Assume that the decision maker is the engineering consultancy
whose utility function is shown in Fig. 2. The utilities associated with the losses are
u(−10⁵ €) = −0.09 and u(−10⁶ €) = −2.3, respectively. The expected utilities as-
sociated with events A and B (the risks) are E[UA] = 0.1 · (−0.09) = −0.009 and
E[UB] = 0.01 · (−2.3) = −0.023. Therefore, although the expected monetary loss
is the same, the risks associated with event B are higher. This effect is commonly
referred to as risk aversion.
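The numbers above are quickly reproduced in a few lines of Python; the sketch assumes the consultancy's utility function u(x) = ln(0.9x/10⁶ + 1), which is stated in Illustration 1 below.

```python
import numpy as np

u = lambda x: np.log(0.9 * x / 1e6 + 1.0)   # consultancy utility, x in EUR

p_A, loss_A = 0.1, -1e5                      # event A
p_B, loss_B = 0.01, -1e6                     # event B

print(p_A * loss_A, p_B * loss_B)   # -10000.0 -10000.0: equal expected loss
print(p_A * u(loss_A))              # E[U_A] = 0.1 * (-0.094) = -0.009
print(p_B * u(loss_B))              # E[U_B] = 0.01 * (-2.303) = -0.023
```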

Illustration 1 (Why Risk Aversion Motivates Insurance) This illustration is taken
from Straub [6]. Consider the engineering consultancy whose preference is repre-

sented by the utility function in Fig. 2:
$$u(x) = \ln\left(\frac{0.9}{10^{6}}\, x + 1\right), \qquad [x \text{ in } €].$$
This company is managing a project that involves considerable risk because of a
penalty in case of a delay. It is estimated that the probability of the event “project
delayed” is p = 5 %, and the penalty associated with that event is 800,000€. The
company is now offered an insurance that, in the event of a delay, covers the penalty
minus a deductible of 80,000€. The premium is 50,000€.
For the engineering consultancy, the expected utility of action a0 , not to buy
insurance, is
$$E[U \mid a_0] = p \cdot u(-800{,}000\, €) = 0.05 \cdot \ln\left(\frac{0.9}{10^{6}} \cdot (-800{,}000) + 1\right) = -0.064.$$
The expected utility of action a1, to buy insurance, is
$$E[U \mid a_1] = p \cdot u(-130{,}000\, €) + (1 - p) \cdot u(-50{,}000\, €) = 0.05 \cdot \ln\left(\frac{0.9}{10^{6}} \cdot (-130{,}000) + 1\right) + 0.95 \cdot \ln\left(\frac{0.9}{10^{6}} \cdot (-50{,}000) + 1\right) = -0.050.$$
Since it is E[U | a1 ] > E[U | a0 ], the optimal decision for the consultancy is to buy
the insurance.
On the other hand, for the insurance company (whose utility function is u1(x) =
x/10⁶) the optimal action is to sell the insurance, since E[U1 | a0] = 0 and E[U1 |
a1] = p · u1(−670,000 €) + (1 − p) · u1(50,000 €) = 0.014.
It is important to realize that insurance only makes sense if the insured party has
a different utility function than the insurer. If the engineering company had a linear
utility function, it should not buy the insurance, since the expected utility associated
with that decision would be lower. (It corresponds to computing expected monetary
values.) This linearity holds approximately when losses are small. (You can verify
this yourself by repeating the above calculations for the case where all costs are
reduced by a factor of 10, i.e. when the penalty cost is 80,000€, the premium is
5,000€, and the deductible is 8,000€. You will find that in this case, insurance is
not an optimal strategy for the consultancy.)
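A minimal sketch reproducing the calculations of this illustration, including the scaled-down variant suggested in the parenthesis above:

```python
import numpy as np

u = lambda x: np.log(0.9 * x / 1e6 + 1.0)   # consultancy utility, x in EUR

p = 0.05                                     # probability of project delay
penalty, premium, deductible = 800_000, 50_000, 80_000

# a0: no insurance -> pay the full penalty if the project is delayed
EU_a0 = p * u(-penalty)                                          # ~ -0.064
# a1: insurance -> pay the premium always, plus the deductible if delayed
EU_a1 = p * u(-(premium + deductible)) + (1 - p) * u(-premium)   # ~ -0.050
print("buy insurance" if EU_a1 > EU_a0 else "do not buy")        # buy

# Scaling all amounts down by a factor of 10 moves the losses into the
# near-linear range of u, and insurance is no longer optimal:
EU_a0_s = p * u(-penalty / 10)
EU_a1_s = p * u(-(premium + deductible) / 10) + (1 - p) * u(-premium / 10)
print("buy insurance" if EU_a1_s > EU_a0_s else "do not buy")    # do not buy
```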

The above example illustrates the effect of risk-averse behaviour. A decision
maker is said to be risk-averse whenever his utility function is concave; mathe-
matically, this corresponds to the utility function having a negative second derivative:
d²u(w)/dw² < 0. This decision maker tries to avert risks, even though this reduces
his expected monetary gains, because it maximizes his expected utility.
Measures for risk aversion have been proposed by economists. The most well
known measure is the coefficient of absolute risk aversion (ARA), introduced by
Arrow and Pratt [32], defined as
$$\mathrm{ARA}(w) = -\frac{u''(w)}{u'(w)} \qquad (2)$$

Fig. 3 Utility functions with different absolute risk aversion (ARA). All utility functions have been scaled to give u(−1) = −1 and u(1) = 1

where u′(w) = du(w)/dw is the first derivative and u″(w) = d²u(w)/dw² the sec-
ond derivative of the utility function with respect to wealth w. Figure 3 shows sev-
eral utility functions with varying ARA. These are of the form
u(w) = 1 − exp(−cw). (3)
This utility function results in an ARA(w) = c that is constant for all values of w
(you can verify this claim by inserting the utility function in Eq. (2)). For a negative
ARA, the decision maker is said to be risk seeking. This corresponds to a convex
utility function, as exemplified in Fig. 3 by the utility function with ARA = −1.
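The claim is verified by differentiating Eq. (3) twice and inserting into Eq. (2):
$$u'(w) = c\, e^{-cw}, \qquad u''(w) = -c^{2} e^{-cw}, \qquad \mathrm{ARA}(w) = -\frac{-c^{2} e^{-cw}}{c\, e^{-cw}} = c.$$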
Alternative measures of risk aversion exist, e.g. the Arrow-Pratt coefficient of
relative risk aversion (RRA):
$$\mathrm{RRA}(w) = -w\, \frac{u''(w)}{u'(w)}. \qquad (4)$$
There is a vast body of literature available investigating these and other measures
of risk aversion (e.g. Menezes and Hanson [30]; Binswanger [11]), most of which
is rather technical. It is, however, important to realize that the utility function is an
empirical function and there is no mathematical form of the utility function that
is justified by some “universal law”. In fact, Rabin [33] shows that even rela-
tively weak assumptions on the form of the utility function, namely the assumption
of diminishing marginal utility for all levels of wealth w, can lead to absurd pre-
dictions when extrapolating from decisions involving small sums to decisions with
large consequences. The reason behind this is that people do not generally behave
consistently according to the expected utility theory, as discussed later in Sect. 3.
This observation does not invalidate the use of expected utility theory, but it points
to the fact that extrapolation of the utility function assuming some underlying math-
ematical form (like the one of Eq. (3)) should not be performed. If this is taken into
consideration, then utility theory (and the measures of risk aversion) provides rules
for optimizing decisions under uncertainty and risk.

2.3.4 Expected Utility Theory vs. Economic Cost-Benefit Analysis

Many decisions involve events with consequences that are small compared to the
“working capital” of the decision maker. This is particularly true if the decision
maker is society or a representative of society, e.g. a governmental body such as
the federal transportation administration. In this case, the utility function will be
linear with respect to monetary values. As we have seen earlier, the ordering of the
expected utility of different decision alternatives is not altered by a linear transfor-
mation of the utility function; we can thus set the utility function equal to mone-
tary values when all consequences are in the linear range of the utility function. In
this case, the decision problem can be reduced to an economic cost-benefit analysis
(Chap. 11, [36]).
Because monetary values are commonly used in society and economics for ex-
changing and comparing the value of different goods and units, decisions are often
assessed based on expected monetary values. However, it is important to be aware
that such an approach is only valid under the conditions stated above (i.e., a linear
utility function in the relevant range of consequences). For example, if the engineer-
ing consultancy in the example above made its decision based on expected
monetary values, it would decide not to buy the insurance, which would not be op-
timal according to the company’s preferences expressed by the non-linear utility
function.

2.4 Multi-attribute Decision Making

So far we have seen utility functions of a single attribute (wealth), yet in most real-
life problems involving risks, consequences are associated with several attributes
(e.g. economic cost and safety). When multiple attributes are relevant, it becomes
necessary to define joint utility functions of the different attributes. Multi-attribute
utility theory (MAUT) as presented in Keeney and Raiffa [3] is concerned with
decision problems involving multiple attributes.
As an example, consider a decision problem with two attributes X1 and X2 .
A possible joint utility function is constructed from the marginal utility functions
u1 (X1 ) and u2 (X2 ) by
u(X1 , X2 ) = c1 u1 (X1 ) + c2 u2 (X2 ) + c12 u1 (X1 )u2 (X2 ). (5)
In this case, the two attributes X1 and X2 are said to be utility independent. Often,
it is c12 = 0 and the joint utility function reduces to
u(X1 , X2 ) = c1 u1 (X1 ) + c2 u2 (X2 ). (6)
In this case, the two attributes X1 and X2 are said to be additive utility independent.
Once the joint utility function u is established, decision analysis proceeds as in
the case of the single attribute: the optimal decision is identified as the one that leads
to the highest value of the expected utility.
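As a minimal sketch of a decision based on Eq. (6), consider a hypothetical choice between two designs characterized by cost and failure probability; the marginal utilities, the weights c1, c2 and the design data are all invented for illustration.

```python
# Additive utility independence, Eq. (6), for two attributes:
# X1 = monetary cost (EUR), X2 = probability of structural failure.
u1 = lambda cost: -cost / 1e6          # marginal utility of cost, linear
u2 = lambda pf: -1e4 * pf              # marginal utility of failure prob.
c1, c2 = 1.0, 1.0                      # weights of the two attributes

designs = {                            # design -> (cost, failure prob.)
    "cheap": (1.0e6, 1e-3),
    "robust": (1.5e6, 1e-5),
}
joint_utility = {
    name: c1 * u1(cost) + c2 * u2(pf) for name, (cost, pf) in designs.items()
}
best_design = max(joint_utility, key=joint_utility.get)   # here: "robust"
```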

We do not go further into the details of MAUT, but we note that whenever mul-
tiple attributes are present (and they are so in most decision problems), a joint
utility function is necessary to make consistent decisions. It is important to be
aware of this, because it is sometimes argued that it is unethical to assess at-
tributes such as the health of humans or ecological values by the same metric as
monetary values (in particular if that metric happens to be the monetary value it-
self). These arguments are generally misleading, however. In the end, a decision
is made, which always implies a trade-off between individual attributes. If two de-
signs for a new roadway are possible, one with lower costs and one with lower
environmental impacts, then the final decision made will imply a preference that
weights these two attributes, if only implicitly. In fact, it is possible to deduce an
implicit trade-off from past decisions. Viscusi and Aldy [48] present an overview
on research aimed at estimating the “value of a statistical life” based on soci-
etal decisions and choices, and Lentz [28] demonstrates how such deduced trade-
offs can be used to assess the acceptability of engineering decisions. The prob-
lem with not making these trade-offs explicit is the possibility for making de-
cisions that reflect an inconsistent assessment of society’s preferences and which
lead to an inefficient use of resources. An example of such inconsistent decision
making is given by Tengs [44], who compares 185 potential life-saving measures
that are or could be implemented in the United States. She finds that with cur-
rent policies, around 600,000 life years are saved by these measures at a cost of
21 Billion US$ (the numbers are valid for the 1990s). By optimizing the imple-
mented measures using cost-effectiveness criteria, she concludes that with the same
amount around 1,200,000 life years could be saved. It follows that the inefficient
use of resources here leads to a loss of around 600,000 life years (corresponding to
around 15,000 premature deaths each year that could be avoided at no additional
cost).3
The above argument does not negate the benefits of communicating the values
of individual attributes for different decision alternatives. In particular for important
and complex decisions it is strongly advocated that decision makers and stakehold-
ers should be given the information on the effect of their decisions on all the relevant
attributes.

3 We note that, in principle, such a cost-effectiveness analysis does not require us to assign our
preferences, i.e. it is not necessary to make the trade-off between money and safety explicit. The-
oretically it would be sufficient to list the measures according to their effectiveness, as done by
Tengs [44], and then starting from the top of the list select all measures that are affordable. In
practice, however, such an approach is not possible, because these measures are implemented by
different governmental agencies and other actors, who do not make a joint planning. By assigning
an explicit trade-off between safety and cost (i.e. by putting a monetary value to human life), how-
ever, it can be ensured that money is spent optimally even without performing a joint optimization.
Each decision can be tested individually against the criteria set by decision analysis, based on the
joint utility function of life-savings and money (see also Lentz [28]).

2.5 Modeling and Optimizing Decisions with Decision Trees and Influence Diagrams

Utility theory prescribes that the optimal set of decisions is the one maximizing the
expected utility. Therefore, normative decision analysis essentially corresponds to
computing the expected utility for a given set of decisions a, E[u(a, Θ) | a], and
then solving the optimization problem:

$$a_{\mathrm{opt}} = \arg\max_a E\big[u(a, \Theta) \mid a\big]. \qquad (7)$$
The operator arg maxa reads: the value of the argument that maximizes the expres-
sion on the right-hand side. The expectation E[·] is with respect to the random vari-
ables describing the uncertain system state Θ = [Θ1; . . . ; Θn]. It is defined as
$$E\big[u(a, \Theta) \mid a\big] = \int_{\Theta_1} \cdots \int_{\Theta_n} u(a, \theta) f(\theta)\, \mathrm{d}\theta_1 \cdots \mathrm{d}\theta_n. \qquad (8)$$
This is a generalization of Eq. (1) to the case of multiple random variables. Equa-
tion (8) applies to the case where all uncertain quantities Θ = [Θ1; . . . ; Θn] are
described by random variables with joint probability density function f(θ). If all
or some of the random variables are discrete, the corresponding integration opera-
tions in Eq. (8) must be replaced with summation operations.
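When Θ is continuous and the integral of Eq. (8) has no closed form, the expectation is commonly approximated numerically, e.g. by Monte Carlo simulation, and the maximization of Eq. (7) is then carried out over a grid of candidate actions. The sketch below does this for a hypothetical flood-protection problem; the cost and damage model and the distribution of the water level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem: action a = dike height (m), uncertain state
# theta = annual maximum water level (m).
def utility(a, theta):
    construction_cost = -0.5 * a                     # grows with dike height
    flood_damage = np.where(theta > a, -10.0, 0.0)   # loss if overtopped
    return construction_cost + flood_damage

theta_samples = rng.gumbel(loc=2.0, scale=0.5, size=100_000)

# Eq. (7): maximize the Monte Carlo estimate of E[u(a, Theta) | a].
heights = np.linspace(2.0, 5.0, 31)
expected_utility = [utility(a, theta_samples).mean() for a in heights]
a_opt = heights[int(np.argmax(expected_utility))]
```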
To represent and model the decisions a and their effect on (expected) utility, deci-
sion trees and influence diagrams have emerged as useful tools. The presentation in
this section is limited to decision problems with given information, i.e. for problems
in which all uncertain quantities are described by known probability distributions
and it is not possible to gather further information. The possibility to collect further
information will be introduced in Sect. 2.6.

2.5.1 Decision Trees

In a decision tree, all decisions a as well as random vectors Θ describing the states
of the system are modeled sequentially from left to right. Each decision alternative
is shown as a branch in the tree, as is each possible outcome of the random vari-
ables. A generic decision tree is shown in Fig. 4, with only one random variable Θ
with m outcome states θ1, . . . , θm. The tree is characterized by the different decision
alternatives a, the system outcomes Θ described by a probability distribution con-
ditional on a, and the utility u as a function of a and . The decision alternatives as
well as the system outcomes can be defined either in a discrete space, a continuous
space or a combination thereof.
The analysis proceeds from left to right: for each decision alternative ai , the
expected value of the utility is computed following Eq. (8) and the optimal decision
is found according to Eq. (7).

Illustration 2 (Pile Selection) This example, which involves only discrete random
variables and decision alternatives, is due to Benjamin and Cornell [10]. A construc-
tion engineer has to select the length of steel piles at a site where the depth to the

Table 1 Utility function

State of nature                   Actions
                                  a1: Drive 15 m piles                 a2: Drive 20 m piles
θ1: Depth to bedrock is 15 m      No loss                              5 m of the pile must be cut off,
                                                                       100 unit loss
θ2: Depth to bedrock is 20 m      Piles must be spliced and welded     No loss
                                  and construction is delayed,
                                  400 unit loss

Fig. 4 Generic decision tree for the analysis with given information

bedrock is uncertain. The engineer has the choice between 15 m and 20 m piles
and the possible states of nature are a 15 m or 20 m depth to the bedrock. The con-
sequences (utilities) associated with each combination of decision and system state
are summarized in Table 1.
The probabilities of the different outcomes are p(θ1 ) = 0.7 and p(θ2 ) = 0.3.
The full decision tree for this problem is shown in Fig. 5. The expected utilities for
decisions a1 and a2 are obtained as E[U | a1 ] = 0.7 · 0 + 0.3 · (−400) = −120 and
E[U | a2 ] = 0.7 · (−100) + 0.3 · 0 = −70. Obviously, the optimal decision is to order
the larger piles.
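A minimal sketch reproducing this analysis from the data of Table 1:

```python
# Expected utilities for the pile selection problem:
p = {"theta1": 0.7, "theta2": 0.3}                  # depth 15 m / 20 m
u = {("a1", "theta1"): 0, ("a1", "theta2"): -400,   # a1: drive 15 m piles
     ("a2", "theta1"): -100, ("a2", "theta2"): 0}   # a2: drive 20 m piles

EU = {a: sum(p[t] * u[a, t] for t in p) for a in ("a1", "a2")}
# EU == {'a1': -120.0, 'a2': -70.0} -> a2, the 20 m piles, is optimal
best_action = max(EU, key=EU.get)
```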

The decision tree grows exponentially with the number of decisions and ran-
dom variables considered, due to the necessary ordering of decisions and random
variables (each decision must be made conditional on the decisions and random
variables to its left, and each random variable is described by a probability distribu-
tion conditional on the decisions and random variables to its left). The decision tree
is thus not convenient for representing decision problems involving more than just
a few parameters. A more efficient and flexible alternative are influence diagrams,
introduced in the following section.

Fig. 5 Decision tree for the pile selection problem with given information

Fig. 6 Influence diagram for a basic decision problem corresponding to the decision tree in Fig. 4

2.5.2 Influence Diagrams

As an alternative to decision trees, decision problems can be represented by influ-
ence diagrams. These are more concise representations of the problem, and they are
particularly useful in problems where several decisions have to be considered. They
were first proposed by Howard and Matheson [21].
Influence diagrams are acyclic directed graphs, whose nodes represent random
variables (round nodes), decisions (square nodes) and utility functions (diamond-
shaped nodes). Directed arrows among the nodes represent the dependence structure
of the problem. Figure 6 shows a generic influence diagram with one decision a and
one random variable Θ. Here it is assumed that Θ depends on the decision a and
the utility is a function of both a and Θ.
To understand the semantics of the influence diagrams, it is useful to interpret
them as extensions of Bayesian networks (BN) (Jensen and Nielsen [2]). The rules
for dependence among the variables follow directly from the BN, with only a few
additions: in influence diagrams, links have the additional meaning of representing
the flow of information. When making a decision a, the state of the variables that
have links going to the node a are known, as are all the ancestors of those variables.
Consider the example of Fig. 7, which is different from the one in Fig. 6 only in the
direction of the link between a and Θ. This graph implies a completely different
decision problem: because the state of Θ is known at the time of making the deci-
sion, this represents a decision problem under certainty. A second important rule in
influence diagrams is that for the case of several utility nodes, it is assumed that the
utility functions are additive independent, Eq. (6).
We do not go further into the details of the influence diagrams here, but note
that they can often be constructed from intuition. However, care is needed in en-
suring that the relations among the nodes are consistent with causality and with

Fig. 7 Alternative influence diagram for a basic decision problem. Here, the uncertain state of the system Θ is known at the time of making the decision a: this is a decision problem under certainty

the assumptions regarding independence among variables. Examples for the con-
struction of such models are given e.g. in Jensen and Nielsen [2], Straub [6]. Free
software that allows the construction and computation of influence diagrams (and
Bayesian networks) is available, e.g. the Genie/Smile code that can be downloaded
from http://genie.sis.pitt.edu/.

2.6 Preposterior Decision Analysis (How to Optimize Decisions on Collecting Information?)

Previously, we have assumed that all information is available at the time of making
the decision and that it is not possible to obtain additional information on the uncer-
tain state of nature Θ. However, in most cases when decisions must be made under
conditions of uncertainty, it is possible to gather additional information to reduce the
uncertainty prior to making the decisions a. As an example, in the decision on flood
protection, it might be possible to perform additional detailed studies to reduce the
uncertainty in estimating damages for given levels of flood. The question that must
be answered is: is it efficient to collect additional information before deciding a? Or
in other words: is the value of the information higher than the cost of obtaining it?
Preposterior decision analysis aims at optimizing decisions on gathering addi-
tional information e, together with decisions on actions a (the letter e is derived
from the word experiment). Typical applications of preposterior decision analysis
are:
– Optimization of monitoring systems and inspection schedules
– Decision on the appropriate level of detailing in an engineering model
– Development of quality control procedures
– Design of experiments
It is important to realize that collecting and analyzing information does not alter
the system. (Exceptions are destructive tests, which sometimes worsen the state of
the system.) For this reason, decisions on gathering information e do not directly
lead to a change in the risk, unlike decisions on actions a. The benefit of e is the
reduction in uncertainty on the system state Θ, which in turn facilitates the selection
of optimal actions a. Preposterior decision analysis allows quantifying this benefit,
the so-called value of information. (The word preposterior derives from the fact
that we calculate in advance (pre-) the effect of information on the model, i.e. the
updating of the prior model with the information to the posterior model.)

The quality of the information obtained by performing e is described by a like-
lihood function L(θ | z) ∝ Pr(Z = z | θ), which is well known from classical statis-
tics. The change in the probability distribution of the system state Θ with informa-
tion z is obtained via Bayes’ rule as
$$f_{\Theta|Z}(\theta \mid z) \propto L(\theta \mid z) f_{\Theta}(\theta). \qquad (9)$$
Once the information z is obtained (posterior case), the optimal decisions aopt are
found according to the procedure described in the previous section, whereby fΘ(θ)
is replaced with fΘ|Z(θ | z). Prior to obtaining the information, however, it is nec-
essary to consider all possible outcomes Z to assess the benefit of collecting the
information in the first place.
In preposterior analysis, we jointly optimize the decisions e and a. If additional
information is obtained through e, then the decision on a will be based on that in-
formation. Therefore, it is not reasonable to determine the optimal action a a-priori.
In contrast, it is possible to optimize so-called decision rules d, which determine
which actions a to take based on the type of experiment performed e and the out-
comes of the experiment Z, i.e., a = d(e, z). For example, a decision rule in the case
of a medical test would be to prescribe a treatment if the test results in a positive
indication and do nothing if the test result is negative. The optimization problem in
preposterior analysis can thus be written as


$$[e_{\mathrm{opt}}, d_{\mathrm{opt}}] = \arg\max_{e,d} E\big[u\big(e, Z, d(e, Z), \Theta\big) \mid e, d\big] \qquad (10)$$

where the utility is now a function of the selected experiments e, the outcome of the
experiments Z, the state of the system Θ and the final actions a, u(e, z, a, θ), and the
expectation is with respect to the system state Θ and the experiment outcomes Z.
Details on how to compute the above expectations, as well as on modeling the
information, can be found in the literature, in particular in the classical reference of
Raiffa and Schlaifer [35] and in Straub [43]. Here, we restrict ourselves to presenting
the computations by means of an illustrative example in the following.

Illustration (Pile Selection) We reconsider the pile selection problem introduced
earlier. The engineer is now considering whether or not she should use a simple
sonic test to obtain a better estimate of the depth to the bedrock. A sound wave
created at the surface is reflected at the bedrock and the time between the hammer
blow and reception at the surface is utilized to estimate the depth. The test has three
possible outcomes, namely estimates of 15 m depth, 17.5 m depth and 20 m depth.
The corresponding test likelihoods L(θj | zi) = Pr(Z = zi | Θ = θj) are summarized
in Table 2.
The sonic test e1 comes at a cost, corresponding to the deployment of the test
equipment and the analysis of the test results. This cost is 20 utility units, i.e.
ue(e1, z) = −20. (The utilities associated with different combinations of bedrock
depth and pile length are given in Table 1.)
To determine whether or not the sonic test should be performed, the engineer
carries out a preposterior decision analysis. She summarizes the problem in the form
of an influence diagram, Fig. 8.

Table 2 Test likelihoods L(θj | zi) = Pr(Z = zi | Θ = θj)

Test result                θ1: Depth is 15 m     θ2: Depth is 20 m
z1: 15 m indication        0.6                   0.1
z2: 17.5 m indication      0.3                   0.2
z3: 20 m indication        0.1                   0.7

Fig. 8 Influence diagram for the pile selection preposterior analysis

The influence diagram can be implemented in software, since all the relevant
information is provided earlier in the text. For this small example, calculations can
also be performed manually, as illustrated in Straub [6]. The decision not to inspect
leads to an expected utility of −70, as was calculated earlier. The decision to inspect
leads to an expected utility of −60, and is therefore optimal. The reason for this
higher utility is that the test might indicate a lower depth and the smaller pile can be
chosen in this case. Even though this indication is not completely reliable (there is a
probability Pr(Θ = θ2 | Z = z1) = 0.07 that the depth is 20 m despite an indication
of 15 m), it is sufficiently accurate to provide a higher expected utility.
The value of information of the test can be computed by comparing the expected
utility with and without the test and subtracting the cost of the test itself. For the
considered sonic test, the value of information is −60 − (−70) − (−20) = 30.
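The full preposterior computation is compact enough to reproduce; the following minimal sketch obtains −70, −60 and the value of information of 30 directly from the data of Tables 1 and 2.

```python
import numpy as np

p_theta = np.array([0.7, 0.3])         # prior: depth 15 m / 20 m
L = np.array([[0.6, 0.1],              # rows: z1, z2, z3 (Table 2)
              [0.3, 0.2],              # columns: theta1, theta2
              [0.1, 0.7]])
u = np.array([[0, -400],               # rows: a1, a2 (Table 1)
              [-100, 0]])              # columns: theta1, theta2
test_cost = 20

p_z = L @ p_theta                               # Pr(Z = z_i): [0.45 0.27 0.28]
posterior = L * p_theta / p_z[:, None]          # Bayes' rule, Eq. (9)
EU_az = u @ posterior.T                         # E[U | a, z], shape (2, 3)
EU_test = p_z @ EU_az.max(axis=0) - test_cost   # best action for each z: -60
EU_no_test = (u @ p_theta).max()                # -70, as computed earlier
value_of_information = EU_test + test_cost - EU_no_test   # = 30
```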

3 Descriptive Decision Making: Decision Making Based on Empirical Observation

3.1 Challenges and Limitations of Normative Decision Theory

“When the map and the territory don’t agree, always believe the territory”
Gause and Weinberg [17]—describing Swedish Army Training

Normative decision theory is widely used in economics, mathematics and engineer-
ing, and in many other decision-related sciences. Its strength lies in the quantifica-
tion of probabilities and outcomes, and thus in translating verbal arguments into a

common (mathematical) language, making different risks directly comparable. Yet,
this strength of the theory is also the source of its weaknesses. Normative decision
theory struggles when quantification cannot easily and accurately be achieved, which is
particularly the case when dealing with many of the more complex challenges and
problems involving risk, in particular those that involve human and social systems
and their interaction with technical systems. Moreover, empirical research has
repeatedly demonstrated that by using normative decision theory one cannot accu-
rately predict how people will decide in a given situation.
The following anecdote reported by Gigerenzer [20, p. 62] illustrates how these
two points of criticism often limit the practical usefulness of normative decision
theory in guiding our decision-making. He describes how
A decision theorist from Columbia University struggled with the decision on whether to
accept an alternative offer from another university or whether he should stay at his current
university. His colleague allegedly gave him the following advice: “Just maximize your
expected utility—you always write about doing this”. To which the decision theorist
replied: “Come on, this is serious”.

It sheds light on the dispute between the different branches of decision theory that
the decision theorist in question, Howard Raiffa, never actually said this, but on the
contrary did decide to move to Harvard using a formal decision analysis to guide his
decision, as he recalls in [34].
Broadly, the limitations of normative decision theory can be divided into the
following two categories:

People Decide Based on Their Subjective and Observer-Dependent Percep-
tions and Observations A main assumption of normative decision theory is
that people’s evaluations and decisions are guided by “objective” and “observer-
independent” criteria. However, empirical research has repeatedly shown us that the
same objective characteristics of a situation can be assessed completely differently
by different people (cf. Welpe et al. [50]). Someone might, for example, think that
an 80 % probability of failing with their entrepreneurial start-up is too high a risk
for them to take, whereas someone else in the same situation might find a 10 % prob-
ability of success to be “a good chance” and “well worth the risk”. In other words,
normative decision theory does not take into account that economic and social eval-
uations and decisions are subjectively perceived and thus observer-dependent. Thus,
different utility functions can lead to different “best or optimized decisions” by dif-
ferent individuals in the same situation or with the same information. Whenever
people are part of the decision-making, there is no universal objective reality that
can be quantified and calculated. What does this mean for the empirical study of
risk and uncertainty?

Probabilities and Outcomes Often Cannot be Quantified in Risk Decisions
Economists have in the past studied risk by looking at rather simple economic risk
games (“gambles”), such as the centipede game. This enhances our understanding
of decision-making in situations where probabilities and outcomes are well-known
in advance. It does, however, help us little in understanding the real-life decisions

of entrepreneurs or politicians, as they are typically not faced with decision situ-
ations in which all different outcomes along with their probabilities are known in
advance. In many situations, decision-makers (regardless of which decision the-
ory is used) are unable to rigorously determine probabilities and outcome values
of all risk-related events in advance (Sect. 2.3.1). Risk managers, entrepreneurs,
decision-makers typically encounter situations that are not entirely mathematically
resolvable, unlike when betting on a number in the Roulette game, where the prob-
abilities of winning and losing as well as the potential pay offs are known in ad-
vance to all players (i.e. decision-makers). This is rarely the case in complex socio-
technical risk problems. This might call into question the usefulness of economic
risk experiments that use gambles to understand risk decision making (Stanton and
Welpe [41]).
Whenever accurate predictions are necessary (e.g. when important issues are at
stake) but impossible, it is better to recognize and accept these limitations
instead of falsely relying on alleged and delusive certainty. For some problems, the
issues can be addressed by making a decision analysis and forecast based on the
best available estimates followed by sensitivity analyses. For problems that are
not sufficiently well understood and whose parameter interrelations are not well
known, in particular social and economic systems that are inherently complex,
self-emergent and variable, it is often impossible to accurately predict the future of
such systems. It is advisable to employ several alternative approaches for risk as-
sessment and risk decisions in order to harvest the strengths of multiple approaches
and compensate for their respective limitations.

3.2 Examining the Underlying Assumptions of Normative Decision Theory

The assumptions of normative decision theory closely resemble and are based on
the well-known (some would say: infamous) “Homo Oeconomicus”. Homo Oe-
conomicus is an artificial model of human perception and decision-making: an
agent who is self-oriented, has preferences that are stable over time and is able to
process information fully and rationally. Following Kirchgässner [26], “Homo
Oeconomicus” lives in an unrealistic world in which all information, including the
probabilities and outcome values of all choice options, is known and freely available
without any transaction costs, which also include the time and energy necessary to
search for, evaluate, contract, and control information and information providers
(e.g. Kirchgässner [26]).
The model of “Homo Oeconomicus” makes a number of additional assumptions,
among which are optimality, universality and omniscience (Kurz-Milcke and
Gigerenzer [27]). Here, optimality means that individuals strive for the best possible
solution instead of a solution that is merely good enough. Omniscience implies that
individuals have complete information about the positive and negative consequences
of a decision. Kurz-Milcke and Gigerenzer [27] further argue (1) that universality
expresses the idea that a common currency or calculus exists which underlies all
decisions, (2) that normative decision theory assumes that humans are always both
willing and cognitively capable of identifying the optimal decision, i.e. the one that
maximizes according to a certain criterion (e.g. money, happiness), and (3) that
individuals (as well as organizations) are fully aware of all existing decision possi-
bilities and their associated costs, benefits and probabilities in the present and future.
Of course, these assumptions are a “mathematical idealization” of reality, and they
are not adequate to completely describe the current evaluations, decisions and
behaviors of people, let alone to predict their future utilities and actions. The question
to ask is whether a completely accurate description is necessary or even helpful for
any given risk management problem.
Previous research has repeatedly shown that the formal conceptualization of ra-
tional decision-makers and the empirically observed human behavior differ sub-
stantially (e.g. Tversky and Kahneman [47]; Kahneman and Tversky [23]). As early
as 1970, Akerlof [8] argued that information is typically unevenly shared between
any two transaction partners, making “information asymmetry” the rule rather than
the exception. Having full information during a decision process is in reality
impossible. Furthermore, transaction costs exist in virtually all transactions
(Coase [13]). And even if a world existed in which all information were known
and freely available, Simon [38] was among the first scholars to point out that the
limited cognitive ability of individuals would still prevent the identification of the
best option from several alternatives. People are simply unable to process and
evaluate every alternative in an acceptable time frame.
Ford et al. [16] review 45 studies that investigate the outcomes of decision-
making and show that humans often use heuristics instead of weighing pros and
cons as normative decision theory would predict. They conclude with the statement
that “the results conclusively demonstrate that non-compensatory4 strategies were
the dominant mode used by decision makers. Compensatory strategies (i.e. trad-
ing off good and bad aspects of two competing alternatives—parentheses added by
Straub and Welpe) were typically used only when the number of alternatives and
dimensions were small or after a number of alternatives have been eliminated from
consideration”.

4 Heuristics are an example of a non-compensatory strategy.

3.3 Behavioral Decision-Making Theories

The following section introduces two theories of decision making that address
the limitations of the classical theory for descriptive decision analysis, namely
(a) prospect theory, which emphasizes the limitations and the cognitive and affective
biases of human decision making, and (b) the approach of ecological rationality,
which emphasizes the human ability to make correct decisions under limited time
and information through the use of heuristics and “gut feeling”. The goal of this
section is to illustrate that human decision-making is inevitably influenced by a
great number of biases and emotional and cognitive influencing factors, which are
difficult to foresee and quantify and which sometimes improve and sometimes
worsen the outcomes of human decisions.

3.3.1 Prospect Theory

Prospect theory was introduced by Kahneman and Tversky [23]. Kahneman was
awarded the Nobel Prize in Economic Sciences in 2002 (Tversky had died in 1996)
“for having integrated insights from psychological research into economic science,
especially concerning human judgment and decision-making under uncertainty”
(Royal Swedish Academy of Sciences [46, p. 1]). Their work integrates normative
decision theory with insights
from behavioral sciences and cognitive psychology. Furthermore, they introduced
experiments as an innovative methodology for economics in their research. These
developments have laid the foundation for a new field of research called behavioral
economics, which has been the starting point of a paradigmatic shift in the study
of human decision-making under risk. In contrast to expected utility theory, which
is considered to be a prescriptive and normative theory, prospect theory is a de-
scriptive theory of human behavior in decision making under risk, formulated as an
extension of the normative expected utility theory.
One of the main contributions of prospect theory is in its explicit consideration
and inclusion of the observer-dependent perceptions of utility and in the subjective
weighting of outcome probabilities. An important aspect economists have previ-
ously overlooked (some continue to overlook it) is that human preferences with
regard to seemingly “objective facts” are highly context-dependent and can conse-
quently show a great deal of inter-individual differences. To illustrate this further:
a glass of water can be worth a few pennies if you are sitting at home and are not
thirsty and it can be worth a million dollars if you are alone in the desert, close to
dying of thirst. This seemingly trivial example illustrates a central point. Standard
economic theory uses normative decision theory, which has not yet found a way to
incorporate how individuals perceive, evaluate, weigh and judge objective proba-
bilities, risks, outcomes, costs and benefits depending on the context and their sub-
jective mental states. Even though these observations and deliberations are hardly
surprising to social scientists, especially psychologists, and probably also to the av-
erage lay person, they had a great impact on economists and economic theory, for
reasons outlined in Sects. 3.1 and 3.2.
Kahneman, Tversky and colleagues empirically investigated the value function of
individuals, in which the loss curve declines more steeply below the reference point
(the person's status quo) than the gain curve rises above it. A main finding of prospect
theory (e.g. [23]) is that people react more sensitively to losses (i.e. changes
below their individual status quo on the value function) than to gains (i.e. changes
above the individual status quo), even when the resulting value of the outcome is
the same (so that normative decision theory predicts the same utility).
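This asymmetry can be made explicit with the parametric value function later proposed by Tversky and Kahneman for cumulative prospect theory. A minimal sketch in Python; the functional form and the parameter values (α = β = 0.88, λ = 2.25) are their published estimates, not quantities given in this chapter:

    def value(x, alpha=0.88, beta=0.88, lam=2.25):
        """Prospect-theory value of outcome x relative to the reference point 0.

        Gains are concave (diminishing sensitivity), losses are convex, and
        losses are amplified by the loss-aversion factor lam > 1.
        """
        if x >= 0:
            return x ** alpha
        return -lam * (-x) ** beta

    # Loss aversion: a loss of 100 hurts roughly twice as much as an
    # equal-sized gain pleases.
    print(value(100))    # approx.  57.5
    print(value(-100))   # approx. -129.5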
Furthermore, this line of research has consistently shown that the subjective per-
ceptions of objectively equal risk alternatives can vary because of different wording
and phrasing of the decision alternatives. The most basic example of this kind is to
describe a glass filled to half its capacity as 50 % empty versus 50 % full. Scholars
(e.g. Tversky and Kahneman [7]; Levin and Chapman [29]) have repeatedly
demonstrated in numerous experiments that individuals' preferences change due to
different wording alone, the so-called “framing” effect.

Illustration (The Framing Effect in an Example by Messick and Bazerman [31, p. 13])
Situation 1: A large car manufacturer has recently been hit with a number of eco-
nomic difficulties. It appears that it needs to close three plants and lay off 6,000
employees. The vice president of production, who has been exploring alternative
ways to avoid the crisis, has developed two plans:
Plan A: Will save one of three plants and 2,000 jobs
Plan B: Has a one-third probability of saving all three plants and all 6,000 jobs,
but has a two-thirds probability of saving no plants and no jobs
Situation 2: Same situation as in situation 1, but two different plans
Plan C: Will result in the loss of two plants and 4,000 jobs
Plan D: Has a two-thirds probability of resulting in the loss of all three plants and
all 6,000 jobs, but has a one-third probability of losing no plants and no jobs
Empirical studies show that most executives choose plan A in situation 1, but
plan D in situation 2, despite Plans A and C being equivalent and Plans B and D
being equivalent. This example shows: when the glass is described as half-full it
is more attractive than when it is described as half-empty. Messick and Bazerman
[31] explain this result by the fact that the decision-makers' reference point differs:
in the first case, the reference point is the good situation in which all plants are OK;
in the second case, it is the bad situation in which all plants must be shut down.
The typical pattern of responses is consistent with the general tendency to be risk
averse with gains and risk seeking with losses. If the problem is framed in terms of
saving jobs and plants (plans A and B), executives tend to avoid the risk and take
the sure plan. If the problem is framed in terms of losing jobs and plants (plans C
and D), executives tend to seek the risk and not take the sure plan.
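A quick check confirms the equivalence that makes this result so striking: measured in expected jobs saved (the plants scale identically), all four plans are the same.

    # Expected number of jobs saved (out of 6,000) under each plan.
    ev_A = 2000                                   # sure option, gain frame
    ev_B = (1/3) * 6000 + (2/3) * 0               # risky option, gain frame
    ev_C = 6000 - 4000                            # sure option, loss frame
    ev_D = (2/3) * (6000 - 6000) + (1/3) * 6000   # risky option, loss frame
    print(ev_A, ev_B, ev_C, ev_D)                 # 2000 2000.0 2000 2000.0

The preference reversal between situations 1 and 2 therefore cannot be explained by expected values; it is driven entirely by the frame.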

Kahneman, Knetsch, and Thaler [24] argue that loss aversion described in
prospect theory influences decision processes in that humans are generally more
negative about potential losses (risks) than they are positive about possible gains
(opportunities). Related to prospect theory, Kahneman et al. [24] have identified a
number of additional cognitive biases and so-called irrational “anomalies” in human
decision-making, for instance the status quo bias and the endowment effect
(Samuelson and Zeckhauser [37], Kahneman et al. [24]).
The endowment effect is closely associated with loss aversion (Thaler [45]) and
becomes salient when the loss of an asset weighs much more heavily in decision-
making than the gain of an asset of the same size and value. The decisive aspect
of the endowment effect is the ownership of an object: research shows that assets
are valued more highly when they are in the possession of the decision-maker than
when they are not. Again, this finding confirms that subjective perceptions of
seemingly objective characteristics are central to describing and predicting human
decision-making. In a similar way, the status quo bias describes the tendency of
individuals to prefer the status quo over taking chances and risks in decision making
(Samuelson and Zeckhauser [37], Fernandez and Rodrik [14]).
According to status quo bias theory, consumer choices depend on which option
is framed as the default (i.e. status quo) option. Kahneman et al. [24] have sug-
gested that the status quo bias is the result of a combination of loss aversion and
the endowment effect. For politicians, management executives and anyone manag-
ing risk-related challenges, the status quo bias means that thinking about what will
constitute the “default” in the organization or decision processes will greatly influ-
ence which decisions will be taken. An example of a risk-related default would be
an organizational rule such as “safety first—when in doubt, do what is best for the
safety of our products and not what is best from an economic perspective”.

3.4 Ecological Rationality and Heuristic Decision Making

The previous sections have dealt with the abilities and inabilities of humans to opti-
mize decisions and make full use of all information available. More often than not,
individuals have to make decisions under limited time and information, which rules
out the application of any analytic decision making procedure to determine an “op-
timal” decision. How do people decide in situations like this? To illustrate this, we
first consider an example.
Gigerenzer [20] gives an example that illustrates the different theories and ap-
proaches of decision making humans can use: the problem of catching a fly ball
in baseball. One could approach this problem by calculating all the
probabilities and utilities or one could use a simple heuristic to catch the ball. It is
impossible for humans to know all necessary parameters of the flight of the ball to
correctly calculate the “parabolic trajectories”, i.e. the “ball’s initial distance, ve-
locity, wind strength and projection angle” necessary to catch the ball. All of these
parameters would need to be assessed and calculated in the short time while the ball
is in the air. As the calculation of these parameters is impossible, Gigerenzer [20]
suggests the use of so-called “heuristics”, in this case the gaze heuristic, to accom-
plish the task of catching the ball. The gaze heuristic works in the following way:
a player fixates the ball, starts running, and adjusts his or her running speed so as
to keep the angle of gaze constant. The player will probably be unable to know or
“calculate” where exactly the ball will touch the ground but, more importantly, by
keeping the angle between his or her eyes and the ball constant, the player will be
at the spot where the ball lands. The gaze heuristic is a well-known example of a
fast and frugal heuristic. It is called fast because it can address problems within a
matter of seconds, and it is called frugal because it requires little information to
work accurately.
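Why the heuristic succeeds can be seen from elementary geometry (a sketch, assuming for simplicity that the ball stays in the player's vertical plane of sight): if the gaze angle θ between the horizontal and the line of sight is held constant, then

    \tan\theta = \frac{z(t)}{d(t)} = \text{const.}
    \quad\Longrightarrow\quad
    d(t) = \frac{z(t)}{\tan\theta} \to 0 \quad \text{as } z(t) \to 0,

where z(t) is the ball's height and d(t) the horizontal distance between player and ball. The player therefore arrives where the ball comes down without ever estimating distance, velocity, wind strength or projection angle.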
Descriptive (and behavioral) decision theory generally agrees that the human in-
formation processing capacity is limited, for example through cognitive and affec-
tive biases, which make human decision making in general—including heuristic
decision making—sub-optimal. In contrast, the heuristics approach advanced
by Gigerenzer and colleagues takes an evolutionary perspective and argues that
such “fast and frugal heuristics” have emerged as a result of human evolution in
order to facilitate good decision-making under limited information and time.
Gigerenzer [18, 19] and colleagues are also critical of behavioral economics on a
number of points. First, with regard to biases (see Sect. 3.3), they argue that these are
“first-best solutions” and “environmental adjustments” of human decision making
resulting from long evolutionary processes. In contrast to behavioral economists,
they do not categorize heuristic decision making or so-called “irrationalities” in
decision making in any negative way as “errors” or “second-best solutions”. They
further argue that calculating probabilities is much more difficult for humans than
understanding frequencies (Gigerenzer [18]). Their basic argument is that bounded
rationality as introduced by Herbert Simon and what Gigerenzer calls effective
“ecological rationality” (i.e. heuristic decision making) do not contradict each other
and in fact often co-exist closely (Gigerenzer and Goldstein [1], Gigerenzer [20]).
The original thinking behind this idea is that heuristic decision making, i.e. decision-
making that is not based on exact numbers or calculations, is more efficient than
decision making based on classic utility maximization. In other words, heuristics
are particularly efficient in situations with limited information and time for decision
making, where mathematical optimization is impossible, which is regularly the case
for managerial, political and also personal decisions. Heuristics nevertheless need
to be constantly adapted to fit the contexts in which they are applied, as no heuristic
is effective or useful in all decision situations.
In the following, we present examples of heuristics, namely the representative-
ness heuristic, the availability heuristic and the affect heuristic.
The representativeness heuristic refers to judgments of the probability of a future
event or the representativeness of a sample. In other words, it describes individuals'
subjective assessment of probabilities based on the comparison of previous experi-
ences with events or individuals that resemble a current event or sample. Particularly
important is the subjectively perceived similarity, which can lead to misjudgments:
the more similar individuals perceive events to be, the more likely they are to ignore
important information and prior probabilities about the current situation or sample.
Another important heuristic is the availability heuristic, which refers to the eval-
uation of the probability of events based on one’s own previous experiences and
memories, which can be easily recalled. The more easily they are recalled, the higher
individuals evaluate the likelihood of similar current events (Kahneman and Tversky
[22]).
A number of recent approaches focus on the role of affect in risk perception
(Loewenstein et al. [4]). The “risk-as-feelings” hypothesis (Slovic et al. [40], Slovic
and Peters [39]) implies that affect is an important determinant of risk perception
and evaluation. Loewenstein et al. [4] argue that individuals perceive risk depend-
ing on their emotions. Researchers have repeatedly shown that emotions have the
potential to influence human decisions through the information processing of the
perceived risk. For example, Finucane et al. [15] showed that people use affective
cues in decision situations under risk. A potential implication of the risk-as-feelings
hypothesis is that positive affect could bias the perception and evaluation of risk.

4 Discussion
All models are wrong, but some are useful.5

This chapter has outlined a number of different decision theories, all of which have
their merits and their limitations. The choice of the theoretical approach must thus
be problem dependent, as emphasized throughout the text. Table 3 summarizes the
three main decision theories presented in this chapter.
The classical decision theory is relatively mature, and current research in this
area focuses mostly on computational aspects of the optimization problem in vari-
ous fields of application. There are, however, some novel developments that address
the difficulty of realistically assessing probabilities in real decision situations. One
example is info-gap theory, which was developed to provide robust decisions on a
non-probabilistic basis (Ben-Haim [9]). The descriptive and the
heuristic theories, due to their empirical nature and shorter history, seem wide open
for development and adaptation. In addition, there is ample potential for research
on the application of both lines of decision theory to practical problems involving
risk. Real decisions (be it in business, technology, politics or other fields) are sel-
dom based on rigorous applications of decision theory, be it normative, descriptive
or heuristic. One reason for this lack is the gap between researchers living in an
“idealized world” and the practitioners dealing with the “dirty reality”.
Concerning the different lines of decision theory, researchers should aim to link
the formalism of classical utility analysis with the empirical appropriateness of
descriptive and behavioral models. In order to understand and improve decision
making on systemic and complex risks, an integrative perspective of normative, de-
scriptive and heuristic decision making may offer many benefits. Another promising
area for future research would be to study the normative and behavioral perspectives
looking at group decisions as opposed to individual decisions. Furthermore, scholars
may want to examine which institutions (rules, regulations, etc.) can be successfully
implemented in order to enhance the effectiveness and efficiency of individual and
group decisions (e.g. debiasing strategies).

5 Quoted from the statisticians Box and Draper [12].


Table 3 Overview of the three decision theories presented in this chapter

Classical (normative) decision theory, expected utility theory
Approach: Normative (how decisions should be made); mathematical, axiomatic theory
Decision criterion: Expected utility (reflecting attributes such as money, safety, happiness); objective/observer-independent; consistent rules; sometimes reduced to cost-benefit analysis
Suitable for/applicable to: Optimizing decision-making when problems are well-defined, i.e. when probabilities and consequences can be reasonably quantified and sufficient time for calculations is available. Important to reduce risks related to technological and environmental hazards
Tools: Expected utility maximization; decision trees and influence diagrams; mathematical optimization; advanced probabilistic models

Descriptive (behavioral) approaches of human decision making
Approach: Descriptive (how decisions are made); empirical, i.e. fitted to observed human behavior (behavioral economics, e.g. prospect theory)
Decision criterion: The aim of descriptive decision theory is to describe what people will actually do, not necessarily what they should do. According to prospect theory, individuals compare decision criteria (objective and subjective) against a reference point
Suitable for/applicable to: Describing (and predicting) actual human behavior; understanding how people actually make decisions (important to reduce risks associated with human and organizational behavior)
Tools: Empirical analyses (e.g. experiments or questionnaire studies) to describe actual decision behavior

Heuristic decision making
Approach: Descriptive (how decisions are made); empirical, with normative elements (decision heuristics in certain situations). Assumption: decision makers have intuition on the problem
Decision criterion: Subjective/observer-dependent cost-benefit analysis; utility (money, safety, happiness)
Suitable for/applicable to: Optimizing decision making under certain conditions (little time and limited information) and within complex systems
Tools: Use of decision heuristics (e.g. representativeness heuristic; cause and result; availability heuristic; affect heuristic)

5 Food for Thought

• What is the value of economics and classical utility theory given that they make
a number of often unrealistic assumptions? Where can they and where can they
not create value added by applying them?
• It has been said that all models are wrong to some degree—is there a point,
however, where a model becomes “too wrong” or “right enough”? If so, how
would one know?
• How can economic theory account for the role of subjective perception of “ob-
jective” values and probabilities in human decision making?
• What is the value of information and how can it be assessed?
• How does one (theoretically) construct a utility function for a decision maker,
following the classical utility theory?
• From two engineering designs for a tunnel construction, which differ only in
safety and cost, one is selected. How can the implicit trade-off between safety
and cost be deduced from this selection?
• It has been argued that by not following the expected utility principle when mak-
ing decisions involving life safety, “we are in effect killing people”. Discuss this
statement.
• A popular “economics joke”: what do economists mean when they write in the
conclusion of their paper: “The evidence for our hypotheses is mixed?” It means
that economic theory supports the hypotheses but the empirical data does not.
Discuss.

6 Summary
Classical normative decision analysis, which is based on the expected utility the-
ory developed by mathematicians, provides an axiomatic framework for optimiz-
ing decisions under uncertainties. It is well suited for identifying optimal decisions
when coping with risks, if probabilities and consequences of adverse events can be
reasonably well quantified. Descriptive decision analysis is a generalization of the
expected utility theory, accounting for the influence of psychological factors on the
decisions made. It is better suited than the classical theory to describe the behavior
of humans under uncertainty and risk. Finally, the chapter outlines newer attempts
to formalize heuristic decision making, which is based on relatively simple rules
and which assumes that these heuristics have developed in an evolutionary process.
These theories are particularly well suited to describe (and sometimes optimize)
decision making under uncertainty and limited time and information.

References

Selected Bibliography

1. G. Gigerenzer, D.G. Goldstein, Reasoning the fast and frugal way: models of bounded ratio-
nality. Psychol. Rev. 103, 650–669 (1996)
2. F.V. Jensen, T.D. Nielsen, Bayesian Networks and Decision Graphs. Information Science and
Statistics (Springer, New York, 2007)
3. R.L. Keeney, H. Raiffa, Decisions with Multiple Objectives (Wiley, New York, 1976).
Reprinted by Cambridge University Press, 1993
4. G.F. Loewenstein, E.U. Weber, C.K. Hsee, N. Welch, Risk as feelings. Psychol. Bull. 127,
267–286 (2001)
5. R.D. Luce, H. Raiffa, Games and Decisions: Introduction and Critical Survey (Wiley, New
York, 1957)
6. D. Straub, Lecture notes in engineering risk analysis. TU München (2011)
7. A. Tversky, D. Kahneman, The framing of decisions and the psychology of choice. Science
211, 453–458 (1981)

Additional Literature and Sources

8. G. Akerlof, The market for ‘lemons’: quality uncertainty and the market mechanism. Q. J.
Econ. 84, 488–500 (1970)
9. Y. Ben-Haim, Info-Gap Decision Theory: Decisions Under Severe Uncertainty (Academic
Press, San Diego, 2006)
10. J.R. Benjamin, C.A. Cornell, Probability, Statistics, and Decision for Civil Engineers
(McGraw-Hill, New York, 1970)
11. H.P. Binswanger, Attitudes toward risk: experimental measurement in rural India. Am. J.
Agric. Econ. 62, 395–407 (1980)
12. G.E. Box, N.R. Draper, Empirical Model-Building and Response Surfaces. Wiley Series in
Probability and Statistics (1987)
13. R. Coase, The nature of the firm. Economica 4, 386–405 (1937)
14. R. Fernandez, D. Rodrik, Resistance to reform: status quo bias in the presence of individual-
specific uncertainty. Am. Econ. Rev. 81, 1146–1155 (1991)
15. M. Finucane, A. Alhakami, P. Slovic, S.M. Johnson, The affect heuristic in judgments of risks
and benefits. J. Behav. Decis. Mak. 13, 1–17 (2000)
16. J.K. Ford, N. Schmitt, S.L. Schechtman, B.M. Hults, M.L. Doherty, Process tracing methods:
contributions, problems, and neglected research questions. Organ. Behav. Hum. Decis. Process.
43, 75–117 (1989)
17. D.C. Gause, G.M. Weinberg, Exploring Requirements: Quality Before Design (Dorset House,
New York, 1989)
18. G. Gigerenzer, From tools to theories: a heuristic of discovery in cognitive psychology. Psy-
chol. Rev. 98, 254–267 (1991)
19. G. Gigerenzer, On narrow norms and vague heuristics: a reply to Kahneman and Tversky.
Psychol. Rev. 103, 592–596 (1996)
20. G. Gigerenzer, Fast and frugal heuristics: the tools of bounded rationality, in Blackwell Hand-
book of Judgment and Decision Making, ed. by D. Koehler, N. Harvey (Blackwell, Malden,
2006), pp. 62–88
21. R. Howard, J. Matheson, Influence diagrams, in The Principles and Applications of Decision
Analysis, Vol. II. (Strategic Decisions Group, Menlo Park, 1981). Published again in: Decis.
Anal. 2, 127–143 (2005)
22. D. Kahneman, A. Tversky, On the psychology of prediction. Psychol. Rev. 80, 237–251
(1973)
23. D. Kahneman, A. Tversky, Prospect theory: an analysis of decision under risk. Econometrica
47, 263–292 (1979)
24. D. Kahneman, J.L. Knetsch, R.H. Thaler, Anomalies: the endowment effect, loss aversion,
and status quo bias. J. Econ. Perspect. 5, 193–206 (1991)
25. G.A. Kiker et al., Application of multicriteria decision analysis in environmental decision
making. Integr. Environ. Assess. Manag. 1, 95–108 (2005)
26. G. Kirchgässner, Homo Oeconomicus: The Economic Model of Behaviour and Its Applica-
tions in Economics and Other Social Sciences (Springer, Berlin, 2008)
27. E. Kurz-Milcke, G. Gigerenzer, Heuristic decision making. Mark. J. Res. Manag. 3, 48–56
(2007)
28. A. Lentz, Acceptability of civil engineering decisions involving human consequences. PhD
thesis, TU München, Germany (2007)
29. I.P. Levin, D.P. Chapman, Risk taking, frame of reference, and characterization of victim
groups in AIDS treatment decisions. J. Exp. Soc. Psychol. 26, 421–434 (1990)
30. C.F. Menezes, D.L. Hanson, On the theory of risk aversion. Int. Econ. Rev. 11, 481–487
(1970)
31. D.M. Messick, M.H. Bazerman, Ethical leadership and the psychology of decision making.
MIT Sloan Manag. Rev. 37, 9–22 (1996)
32. J.W. Pratt, Risk aversion in the small and in the large. Econometrica 32, 122–136 (1964)
33. M. Rabin, Risk aversion and expected-utility theory: a calibration theorem. Econometrica 68,
1281–1292 (2000)
34. H. Raiffa, Decision analysis: a personal account of how it got started and evolved. Oper. Res.
50, 179–185 (2002)
35. H. Raiffa, R. Schlaifer, Applied Statistical Decision Theory (Cambridge University Press,
Cambridge, 1961)
36. J. Roosen, Cost-benefit analysis, in Risk – A Multidisciplinary Introduction, ed. by C. Klüp-
pelberg, D. Straub, I. Welpe (2014)
37. W. Samuelson, R. Zeckhauser, Status quo bias in decision making. J. Risk Uncertain. 1, 7–59
(1988)
38. H. Simon (ed.), Models of Man: Social and Rational (Wiley, New York, 1957)
39. P. Slovic, E. Peters, Risk perception and affect. Curr. Dir. Psychol. Sci. 15, 322–325 (2006)
40. P. Slovic, M. Finucane, E. Peters, D.G. MacGregor, Risk as analysis and risk as feelings: some
thoughts about affect, reason, risk, and rationality. Risk Anal. 24, 1–12 (2004)
41. A.A. Stanton, I.M. Welpe, Risk and ambiguity: entrepreneurial research from the perspective
of economics, in Neuroeconomics and the Firm, ed. by A.A. Stanton, M. Day, I.M. Welpe
(Edward Elgar, Cheltenham, 2010), pp. 29–49
42. D. Straub, Engineering risk assessment, in Risk – A Multidisciplinary Introduction, ed. by
C. Klüppelberg, D. Straub, I. Welpe (2014)
43. D. Straub, Value of information analysis with structural reliability methods. Struct. Saf.
(2014). doi:10.1016/j.strusafe.2013.08.006
44. T.O. Tengs, Dying too soon: how cost-effectiveness analysis can save lives. NCPA Policy
Report #204, National Center for Policy Analysis, Dallas (1997)
45. R. Thaler, Toward a positive theory of consumer choice. J. Econ. Behav. Organ. 1, 39–60
(1980)
46. The Royal Swedish Academy of Sciences. Press release, advanced information on the prize
in economic sciences 2002, 17 December 2002 (retrieved 28 August 2011). http://www.
nobelprize.org/nobel_prizes/economics/laureates/2002/ecoadv02.pdf
47. A. Tversky, D. Kahneman, Judgment under uncertainty: heuristics and biases. Science 185,
1124–1131 (1974)
48. W.K. Viscusi, J.E. Aldy, The value of a statistical life: a critical review of market estimates
throughout the world. J. Risk Uncertain. 27, 5–76 (2003)
49. J. von Neumann, O. Morgenstern, Theory of Games and Economic Behavior (Princeton
University Press, Princeton, 1944)
50. I.M. Welpe, M. Spörrle, D. Grichnik, T. Michl, D. Audretsch, Emotions and opportunities:
the interplay of opportunity evaluation, fear, joy, and anger as antecedent of entrepreneurial
exploitation. Entrep. Theory Pract. 36, 1–28 (2012)
