Decision Problems, Risk and Uncertainty

In this post, an introduction to decision-making under risk and uncertainty is provided. To this end, basic concepts and components of a decision-making problem are explained and illustrated. Preference relations of a decision maker as well as corresponding utility functions are outlined and put into context.

Decision theory, as well as the related game theory, can offer perspectives on how a decision maker should act under various circumstances in order to obtain a maximum of utility, see [7]. The theory can generate answers and other useful information for maximizing utility or, in the case of an entrepreneur, profit.
In particular, modern game theory can offer models that are capable of handling high degrees of uncertainty much better than classical probability or measure theory, see [2].

Overview on the Topic by Itzhak Gilboa

Itzhak Gilboa provides a quite good overview of the subject in the following video, which might serve as an introduction to the topic:

There is also a rather non-technical free eBook available – Theory of Decision under Uncertainty by Itzhak Gilboa. It explains many of the concepts discussed and studied in the present post.

What is Decision Theory and How is it Related to Other Theories?

Decision theory deals with situations in which one or more actors have to make choices among given alternatives. Decision-making is considered to be a cognitive process resulting in the selection of a belief or a course of action among several alternative possibilities. Decision theory is also sometimes called theory of choice.

Decision theory provides a means of handling the uncertainty involved in any decision-making process. If enough information is available, uncertainty with respect to the outcomes might be handled by condensing the available information into a probability distribution and maximizing the so-called “expected utility”. However, there are also situations where the degree of uncertainty is too high to come up with a reasonable distribution assumption. Game theory provides theoretical frameworks for modeling both types of situations.

Decision theory is closely related to modern game theory, which itself has many interconnections to the so-called theory of capacities (also refer to [1] and [2]). The latter theory is a possible framework that can be applied to situations with a high degree of uncertainty.

Decision theory is also applied in artificial intelligence. This also holds true for the concepts of capacity (note that belief functions are special capacities) and uncertainty. Both concepts will be outlined below.

Components of a Decision Problem

A decision problem is a situation where a decision maker (in this context also called agent) has to make a choice between several actions. The outcomes/consequences C of each action depend on the states of nature X. An action can therefore be represented by a function f:X \rightarrow C, which assigns a consequence c=f(x)\in C to each state x\in X. That is, the outcome of an action is in general uncertain since it depends on the state of nature.

An illustrative example is betting on a horse race: imagine that ten different horses start in a horse race. The set of states of nature X is the set of horses, the consequences C are amounts of money ranging from a loss (of the ticket price) to a win. An action represents a bet taken by the decision maker.
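
To make these components concrete, here is a minimal Python sketch of the horse race example; the ticket price, the odds and the horse names are invented for illustration.

    # Minimal sketch of the horse race decision problem; ticket price, odds and
    # horse names are illustrative assumptions, not taken from the post.
    TICKET_PRICE = 10.0                             # hypothetical price of a bet
    HORSES = [f"horse_{i}" for i in range(1, 11)]   # states of nature X: the possible winners

    def bet_on(horse, odds):
        """Return the action f: X -> C induced by betting the ticket price on `horse`."""
        def action(state):
            # consequence c = f(x): net win if the backed horse wins, otherwise the ticket is lost
            return TICKET_PRICE * (odds - 1.0) if state == horse else -TICKET_PRICE
        return action

    f = bet_on("horse_3", odds=5.0)  # one particular action (a bet on horse_3)

    # The consequence of the action depends on the unknown true state of nature:
    for state in ("horse_1", "horse_2", "horse_3"):
        print(f"if {state} wins, the consequence is {f(state):+.2f} EUR")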

The states of nature X reflect the potential scenarios that might be realized going forward. It is uncertain which state is the true one. The states of nature describe the real-world process that generates the uncertainty. X is assumed to be exhaustive. That is, the true but unknown state of nature should belong to X. Sometimes, this is also called the closed world assumption. The decision maker does not have any influence on which state is true. The states of nature are endowed with a \sigma-algebra \mathcal{A} in the continuous case, whereas \mathcal{A} denotes the power set 2^X in the discrete case.
Regarding the horse race example, it is required that X contains the winning horse since every horse corresponds to a world in which this horse will win the race. A priori it is not clear which horse is going to win and the decision maker has no influence on the outcome of the race.

Subsets of X are called events, and the elements of X are mutually exclusive since only one of them can be realized.

A typical decision problem within finance is to decide how to invest given a universe of potential assets. The states of nature comprise the asset price developments over the relevant time horizon, the investment sizes, etc. Acts reflect the selection of a portfolio, and the consequences could comprise the corresponding profits and losses as well as risk figures.

Outcomes/consequences are mostly real numbers. However, the set of potential outcomes can also be any other set.

For instance, suppose you leave the house in the morning without an umbrella although the weather forecast predicted rain. Possible consequences might be that you get wet or that you have to seek cover from the rain and are therefore late. The states of nature X reflect the possible weather conditions (sunny, heavy rain, rainy, etc.) and other related circumstances.

We furthermore assume in the following that the consequences C are quantifiable and thus represented by the real numbers \mathbb{R}. The set of all actions F then comprises a space of functionals. A functional is a special function whose range is a subset of the real numbers. The functional space becomes important when integrals of f are considered.

Next, let us discuss what types of interpretations of probability exist. This will be fruitful in the subsequent study of risk vs. uncertainty.

What is Probability?

There is no unique answer to the question of what probability actually is. Nonetheless, let us consider some (intuitive) thoughts about probabilities:

  • It is about an event or a situation, where the outcome is random and therefore not known a priori.
  • Randomness might be the lack of pattern or predictability in events and situations.
  • Probabilities should reflect the likelihoods of specific events. That is, a certain degree of knowledge about the events is needed. Otherwise, we would not be able to determine the probabilities.

Let us now discuss what features events/situations need to have, such that they can be studied using probabilities. What about real-world situations such as gambling games or other repeatable real-world events with stable conditions?

Gambling games can be modeled as well-defined experiments since their scope and conditions are stable. In addition, the outcome of one game (e.g. rolling dice) usually does not affect other games, i.e., the games are usually independent. Games can usually be repeated arbitrarily often and they can easily be simulated. If the games are conducted properly, the interpretation of the outcomes of many games should be objective to a certain degree.
A frequentist would interpret a probability as the limit of its relative frequency in a large number of trials. This interpretation stresses the importance of experimental science (e.g. experimental physics) and it is ultimately based on the law of large numbers. Note, however, that relative frequencies cannot serve as a definition of the concept of probability to begin with: the law of large numbers already uses the concept of probability, so the limit of relative frequencies cannot be used to define it. Instead, axiomatic conditions (i.e. the Kolmogorov axioms) are imposed such that results like the law of large numbers are valid and can be applied.
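
As a small illustration of the frequentist viewpoint, the following Python sketch repeatedly rolls a (simulated) fair die and tracks the relative frequency of a six; the trial counts and the seed are arbitrary choices.

    import random

    random.seed(42)  # fixed seed so the illustration is reproducible

    def relative_frequency_of_six(n_trials):
        """Roll a fair die n_trials times and return the relative frequency of a six."""
        hits = sum(1 for _ in range(n_trials) if random.randint(1, 6) == 6)
        return hits / n_trials

    # The relative frequency stabilizes around 1/6 ≈ 0.1667 as the number of
    # repetitions grows -- the law of large numbers at work, which presupposes
    # (rather than defines) the underlying probability.
    for n in (10, 100, 10_000, 1_000_000):
        print(f"{n:>9} rolls: relative frequency of a six = {relative_frequency_of_six(n):.4f}")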

There are of course also situations/events that cannot be repeated or recreated under the same conditions. A financial crisis, for example, is by nature a one-off event and can therefore not be repeated under the same conditions. The global financial crisis that erupted around 2007 has had a huge impact on politics and financial regulation and will thus likely affect subsequent crises (i.e. the events are not independent). And even if one thinks of several similar crises as independent realizations of the same random variable, the data basis would most likely still be too small to derive any sensible conclusions from it.
An alternative approach is therefore to interpret probability as subjective belief. That is, a probability does not describe a property of an event but rather the subjective beliefs about it. Financial experts could, for instance, be asked how likely they think it is that another financial crisis will happen going forward. Probability theory is then used to model beliefs and to update them in the light of new evidence using Bayes' theorem. However, Bayesian statistics also cannot deal with too high a level of uncertainty (refer to [2]). Please also refer to Belief functions: past, present and future by Cuzzolin at Harvard Statistics. Cuzzolin explains potential difficulties that may arise when Bayesian statistics is used and also outlines why belief functions (i.e. specific capacities) might work better.
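
To illustrate the subjective (Bayesian) viewpoint, the following sketch updates a hypothetical prior belief about the event "another crisis occurs" using Bayes' theorem; the prior and the assumed reliability of the expert warning are invented for illustration.

    def bayes_update(prior, likelihood_if_true, likelihood_if_false):
        """Return P(H | E) for a binary hypothesis H given evidence E via Bayes' theorem."""
        evidence = likelihood_if_true * prior + likelihood_if_false * (1.0 - prior)
        return likelihood_if_true * prior / evidence

    # Hypothetical numbers: prior belief in a crisis of 10%; an expert issues a warning
    # with probability 60% if a crisis is coming and with probability 20% otherwise.
    prior = 0.10
    posterior = bayes_update(prior, likelihood_if_true=0.60, likelihood_if_false=0.20)
    print(f"belief after one expert warning: {posterior:.2%}")   # 25.00%

    # Beliefs can be updated sequentially as further (conditionally independent) warnings arrive.
    posterior2 = bayes_update(posterior, 0.60, 0.20)
    print(f"belief after a second warning:  {posterior2:.2%}")   # 50.00%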

There are also other theoretical/philosophical situations such as Pascal’s wager, where relative frequencies cannot exist.

In the following section we are going to explore what uncertainty actually is and how it is related to the concept of risk.

What is Uncertainty and Risk?

The general term uncertainty covers many aspects: unknown states of nature, unknown probabilities, the preferences of the decision maker, and many other things that are not yet known (i.e. unknown unknowns).

F. H. Knight first distinguished between ‘risk’ and ‘uncertainty’ in his seminal 1921 book [5]. On the one hand, Knightian or complete uncertainty may be characterized as the complete absence of information or knowledge about a situation or the outcome of an event. Knightian risk, on the other hand, may be characterized as a situation where true information on the probability distribution is available. In practice, no parameter or distribution can be known for sure a priori. Hence, we find it preferable to think in terms of degrees of uncertainty.

Coming back to gambling – discrete probability theory and combinatorics were conceived in the 16th and 17th centuries by mathematicians attempting to solve gambling problems (see [3]). Due to the fact that the conditions of gambling games are well known and stable, the degree of uncertainty is comparatively low. Good (but not perfect) estimates of the corresponding (objective) probability distributions are known. In such cases, classical probability theory and/or classical measure theory are the right tools to tackle these types of problems. Note that even gambling games are subject to a certain degree of uncertainty. For instance, a die might be biased since it is not (and cannot be produced) perfectly symmetric.

Financial markets are complex systems involving human behavior and herd behavior. That is, financial markets are usually subject to a high level of uncertainty. Corresponding financial risks such as market or credit risk are therefore hard to estimate. Important conditions such as the economic environment, financial regulation as well as the political landscape change over time. Hence, we might be able to observe a stock price on a regular basis, however, under very different circumstances. In addition, scarcity of data is quite common in finance (e.g. default-related data of companies such as PDs or LGDs).

Results that hold under a high degree of uncertainty can also be applied to situations of risk, and results derived for risk can often be extended to situations with a high degree of uncertainty.

An interesting standpoint on the models used in finance under high uncertainty is provided on Risk.net in Parameter und Modellrisiken in Risikoquantifizierungsmodellen (“Parameter and Model Risks in Risk Quantification Models”) by Volker Bieta.

Decision under Risk and Uncertainty

Given the uncertainty bearing on the states of nature X, it is natural to endow X with a \sigma-algebra \mathcal{A} or the power set 2^X, depending on the cardinality of X.

In order to supplement the space (X, \mathcal{A}) with a probability measure \mathbb{P}, an objective body of evidence needs to be available to estimate and test the corresponding distribution assumption. Decision-making based on objective information about the probability distribution over the states of nature is called decision under risk. Actions can then be seen as random variables (r.v.) f:X \rightarrow C=\mathbb{R}.

If no such objective information is available, or if only a subjective perception of the states of nature remains, the decision-making process is called decision under uncertainty. In such cases one must extend classical probability theory and deal with capacities, for instance. The probability resulting from the subjective perception is also known as subjective probability. Refer to [4].

Decision under risk can be found everywhere in the literature and in finance. Modern game theory and its mathematical underpinnings (including the theory of capacities) should come to the forefront as a tool for decision makers, in particular when complex problems need to be resolved and uncertainty is present.

In the review paper [2], the mathematical basics of uncertainty in finance are explained and put into context.

Preference Relation

Before we actually study subjective preferences, we would like to point out that there are markets where preferences should not impact the price of a good. In a so-called complete market, the price of a contingent claim is fully determined by arbitrage arguments. Here, potential risks that may influence the utility and thus the preference relation of a decision maker can be perfectly hedged away. In an incomplete market, however, not all risks can be hedged and thus preferences play a role in determining the price of a good. Please refer to the great book [6] for more details.

We assume that each decision maker has his or her own individual preferences, beliefs and desires about how the world is or should be. A decision maker’s preference relation affects his or her choices and needs to be taken into consideration. Faced with two different consequences x, y\in \mathcal{X}, a decision maker might prefer one over the other, that is, x > y. An element of \mathcal{X} can be interpreted as a possible choice of the decision maker, that is, \mathcal{X} contains the states of nature X. In addition, it can also be identified with a suitable subset of all corresponding probability distributions (provided that known probability distributions exist).

The relation between different preferences of a decision maker can be modeled using a binary relation > \ \subseteq  \mathcal{X}  \times  \mathcal{X} with the following properties:

  • Asymmetry: if x > y, then y \not > x;
  • Negative transitivity: if x > y and z \in \mathcal{X}, then x > z or z > y (or both) holds.

This binary relation > is called (strict) preference order or preference relation of the decision maker over \mathcal{X}. Hereby, x > y reads “x is preferred to y”. A binary relation can be used to compare two elements and is therefore actually a subset of \mathcal{X} \times \mathcal{X}.

Negative transitivity states that if a clear preference x > y exists between two choices x and y, and a third choice z is added, then z cannot escape comparison altogether: either x is still preferred to z, or z is preferred to y (or both) [6].

Illustration of negative transitivity

The meaning of the term ‘negative transitivity’ becomes obvious when considering the following characterization.

A relation > \subseteq \mathcal{X} \times \mathcal{X} is negatively transitive if, and only if,

(1)   \begin{align*} x \not > y  \text{ and }  y \not > z  \Rightarrow x \not > z     \quad \forall  x,y,z \in \mathcal{X}. \end{align*}


Suppose that > is negatively transitive. We must show that the preference order fulfills implication (1). Assume that x \not > y and y \not > z but x > z, contradicting the desired implication. Then, since y\in \mathcal{X} and x > z, negative transitivity yields x>y or y>z, which contradicts our initial assumption. Hence, x \not > z as desired. Now suppose that > fulfills (1). We are going to show that if x>y and z\in \mathcal{X}, then either x>z or z>y is valid. Let z\in \mathcal{X} and suppose that x \not > z and z \not > y. Then, x\not > y by (1), contradicting our initial assumption. Hence, x >z or z>y must hold.
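
For a finite set of alternatives, asymmetry and the characterization (1) of negative transitivity can be checked by brute force. The following Python sketch does this for a small example relation induced by an arbitrarily chosen numerical score.

    from itertools import product

    def is_asymmetric(prefers, alternatives):
        """x > y must rule out y > x."""
        return all(not (prefers(x, y) and prefers(y, x))
                   for x, y in product(alternatives, repeat=2))

    def is_negatively_transitive(prefers, alternatives):
        """Characterization (1): x !> y and y !> z imply x !> z."""
        return all(prefers(x, y) or prefers(y, z) or not prefers(x, z)
                   for x, y, z in product(alternatives, repeat=3))

    # Example: a strict preference induced by a numerical score (more is better).
    alternatives = ["a", "b", "c", "d"]
    score = {"a": 3, "b": 1, "c": 2, "d": 1}

    def prefers(x, y):
        return score[x] > score[y]

    print("asymmetric:           ", is_asymmetric(prefers, alternatives))             # True
    print("negatively transitive:", is_negatively_transitive(prefers, alternatives))  # True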

A preference relation > arranges the decision maker’s actions according to his/her preferences. It also induces a corresponding weak preference order \geq defined by

    \begin{align*}x \geq y \ :\Leftrightarrow y \not > x,\end{align*}


and an indifference relation \sim given by

    \begin{align*}x \sim y \ :\Leftrightarrow  x \geq y \text{ and } y\geq x.\end{align*}


x \geq y, for example, means that either x is preferred to y or there is no clear preference between the two choices. A reasonable decision maker should prefer a bet that is always ‘better’ than all other possible bets according to the preference relation. On the real line \mathbb{R} it is clear what ‘better’ means since there is a natural order defined on \mathbb{R}. This natural order serves as an illustrating example for both types of preference orders.

The asymmetry together with the negative transitivity of > is equivalent to the following two respective properties of \geq:

  • Completeness: y\geq x or x \geq y (or both) holds for all x, y \in \mathcal{X};
  • Transitivity: if x \geq y and y \geq z, then x \geq z.

Completeness of the weak preference order means that a decision maker is always capable of deciding between the presented alternatives (y\geq x or x \geq y or both are true). Transitivity tells us that if a decision maker considers x at least as good as y and y at least as good as z, then x is at least as good as z.

Let us have a closer look into why asymmetry and negative transitivity imply transitivity. To this end, assume that transitivity does not hold for a strict preference order. That is, suppose that x>y and y>z but x\not >z. By asymmetry we infer z\not >y from y>z. Applying (1) to x\not > z and z \not > y yields x\not > y, which contradicts the assumption x>y. Hence, x>z must hold.

A weak preference order can be split up into the corresponding asymmetric (>) and symmetric (\sim) part:

    \begin{align*}x > y \ :\Leftrightarrow  y \not \geq x.\end{align*}

The indifference relation \sim is an equivalence relation, that is, it is reflexive, symmetric and transitive. If we consider equivalence classes instead of single elements, the discussion can be simplified without any loss of generality.

It is often preferable to work with functions instead of relations. A numerical representation of a preference order > is a function U:\mathcal{X} \rightarrow \mathbb{R} such that

    \begin{align*}x > y \  \Leftrightarrow  U(x) > U(y).\end{align*}


With respect to the weak order, we have

    \begin{align*}x \geq y \  \Leftrightarrow  U(x) \geq U(y).\end{align*}



Please note that such a numerical representation is not unique. For instance, if f is any strictly increasing function, then U^*(x):=f(U(x)) is again a numerical representation. von Neumann and Morgenstern proved that if one’s preferences meet certain criteria, those preferences can be represented by a numerical representation U of > that is unique up to positive affine transformations, see [7]. These criteria are the Archimedean and the independence axioms. For more details, we refer to the highly recommended book [6].
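
The non-uniqueness of a numerical representation is easy to verify numerically: the following sketch compares an arbitrarily chosen representation U with a strictly increasing transformation of it and checks that both rank randomly drawn points identically.

    import math
    import random

    random.seed(0)

    U = math.log                  # an arbitrary numerical representation on (0, inf)
    f = lambda t: 3.0 * t + 1.0   # a strictly increasing transformation
    U_star = lambda x: f(U(x))    # U* = f(U) is again a numerical representation

    # For randomly drawn pairs, U and U* always agree on which element is ranked higher.
    for _ in range(5):
        x, y = random.uniform(0.1, 10.0), random.uniform(0.1, 10.0)
        same_ranking = (U(x) > U(y)) == (U_star(x) > U_star(y))
        print(f"x = {x:5.2f}, y = {y:5.2f}: rankings agree -> {same_ranking}")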

Expected Value and Utility

But how shall we determine a preference relation?

Two lotteries (choices) for a decision maker – what would you pick and why?

Let us assume that a decision maker is presented with the two lotteries illustrated above. A lottery is a discrete probability distribution on a (sub-)set of the states of nature X. A naive approach is to use the expected values of the corresponding random variables to determine the preference of an individual. Then, one would prefer EUR 8.25mn = 0.75 \cdot EUR 11mn over EUR 8mn = 0.8 \cdot EUR 10mn.
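
A naive comparison of the two lotteries by expected value might look as follows in Python; the payoffs and probabilities are those stated above, assuming a zero payoff in the remaining cases.

    def expected_value(lottery):
        """Expected value of a discrete lottery given as (payoff, probability) pairs."""
        return sum(payoff * prob for payoff, prob in lottery)

    # Lottery A: EUR 11mn with probability 0.75, nothing otherwise.
    # Lottery B: EUR 10mn with probability 0.80, nothing otherwise.
    lottery_a = [(11_000_000, 0.75), (0, 0.25)]
    lottery_b = [(10_000_000, 0.80), (0, 0.20)]

    print(f"E[A] = EUR {expected_value(lottery_a):,.0f}")  # 8,250,000
    print(f"E[B] = EUR {expected_value(lottery_b):,.0f}")  # 8,000,000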

This idea, however, does not seem quite compatible with human behavior. Are utility and expected payoff (always) the same? There is no clear answer to this question as it actually depends on the individual.

The famous St. Petersburg Paradox [8] illustrates that many people are rather risk averse:

  • A fair coin is tossed repeatedly until it comes up heads for the first time. Let us say this happens on the n-th toss;
  • The payoff is then € 2^n;
  • What is the most you would be willing to pay for this bet?

It turns out that the expected payoff of this bet is \infty. The reason is that the payoff increases and the likelihood decreases at the same exponential pace. Let p_i=\frac{1}{2^i} be the likelihood that heads comes up for the first time on the i-th toss and x_i=2^i the corresponding payoff:

    \begin{align*}\sum_{i=1}^{\infty}{p_i \cdot x_i} &= \frac{1}{2} \cdot 2+  \frac{1}{4} \cdot 4+  \frac{1}{8} \cdot 8 + \ldots +  \frac{1}{2^i} \cdot 2^i + \ldots \\&= 1+1+ \ldots+1+ \ldots = \infty.\end{align*}

Even though it is very unlikely that heads comes up for the first time only on, let us say, the 1000-th toss, it is possible, and the corresponding payoff of 2^{1000} would be enormous.

How much would you pay now that you know the expected monetary payoff is \infty?

Note that it is impossible to gain infinite wealth in the real world since our resources are finite. In addition, one would have to play the game very, very often for the average payoff to actually tend (in theory) towards infinity.
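
The following Monte Carlo sketch illustrates how slowly the average payoff of the St. Petersburg game grows with the number of games played; the seed and the sample sizes are arbitrary.

    import random

    random.seed(1)  # arbitrary seed for reproducibility

    def petersburg_payoff():
        """Toss a fair coin until heads appears on toss n and pay out 2**n."""
        n = 1
        while random.random() < 0.5:  # tails with probability 1/2
            n += 1
        return 2 ** n

    for games in (100, 10_000, 1_000_000):
        average = sum(petersburg_payoff() for _ in range(games)) / games
        print(f"average payoff over {games:>9} games: {average:10.2f} EUR")
    # Despite the infinite expected value, the sample averages stay modest and grow
    # only very slowly (roughly logarithmically) with the number of games played.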

For most people, their subjective utility (i.e. how much something matters to the individual) and the objective expected payoff diverge [8]. Usually, money has a declining marginal ‘utility’, and it is the expected utility (not the expected objective monetary payoff) that rationality requires us to maximize, provided that certain assumptions are met.

The following Vsauce2 video is also dedicated to the St. Petersburg Paradox and expected utility theory.

In the following section it is assumed that the utility function u of a decision maker is known.

Utility Functions and Decisions under Risk

Decision under risk assumes that objective probabilities are known for all states of nature X. That is, for each possible choice of the decision maker a corresponding probability distribution on a given subset of scenarios of X exists. Hence, the set \mathcal{X} can be identified with a subset \mathcal{D} of all probability measures on (X, \mathcal{A}). In this context, and provided that X is discrete, the probability measures can also be identified with lotteries.

For many individuals the expected value does not equal their subjective utility. Hence, the idea is to use an individual, so-called utility function to personalize the value of the event and/or the situation. A utility function is a mapping u: C \rightarrow  \mathbb{R}, which quantifies the preference relation > of the decision maker.

Extended illustration of sets, mappings and spaces

The larger u(c), the better from the point of view of the decision maker with utility function u. For example, if c_1, c_2\in C are two possible consequences and u(c_1) \leq u(c_2), then the consequence c_2 is at least as good as c_1.

The St. Petersburg Paradox can be resolved by applying u(x_i)=\log_2(x_i) as a possible utility function:

    \begin{align*}\sum_{n=1}^{\infty}{p_n  \cdot \log_2(2^n)} &= \frac{1}{2} \cdot 1+  \frac{1}{4} \cdot 2+   \frac{1}{8} \cdot 3 + \ldots +  \frac{1}{2^n} \cdot n + \ldots \\&= \frac{1}{2} + \frac{2}{4}  + \frac{3}{8} +  \ldots \\&= \sum_{n=1}^{\infty}{\frac{n}{2^n}} = 2.\end{align*}
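
The convergence of this expected-utility series to 2 can be checked numerically with a few partial sums; a minimal sketch:

    def expected_log2_utility(n_terms):
        """Partial sum of sum_{n >= 1} (1 / 2**n) * log2(2**n) = sum_{n >= 1} n / 2**n."""
        return sum(n / 2 ** n for n in range(1, n_terms + 1))

    for terms in (5, 10, 20, 50):
        print(f"first {terms:>2} terms: {expected_log2_utility(terms):.6f}")  # approaches 2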

Each and every individual might have his or her own subjective utility function; the probabilities, however, are known and objective.

Recall that we have assumed C= \mathbb{R}, such that we restrict our considerations to functions u:\mathbb{R} \rightarrow \mathbb{R}. The domain of u can be considered as the decision maker’s accumulated wealth and the range of u as the decision maker’s utility (i.e. satisfaction with respect to the cumulative wealth).

But what is the effect of risk on the utility function u?

Illustration of different types of utility functions

As motivated by the St. Petersburg Paradox, there are many people who are averse to risk-taking. Given the diversity among people, there are also other types. We distinguish between three types [9]:

Risk Neutral Utility
Risk-neutrality holds if every prospect is indifferent to its expected value. That is, the marginal utility is constant with increasing wealth.

Risk Averse Utility
Risk-aversion holds if every prospect is less preferred than its expected value. That is, the marginal utility of wealth declines with increasing wealth.

Risk Seeking Utility
Risk-seeking holds if every prospect is preferred to its expected value. That is, the marginal utility increases with increasing wealth.

Please note that the above descriptions are not very rigorous; however, they provide a good idea of how utility functions might be classified.
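
The classification can be illustrated numerically by comparing the expected utility E[u(X)] of a simple lottery with the utility of its expected value u(E[X]); the lottery and the three stylized utility functions below are chosen for illustration only.

    import math

    # A simple illustrative lottery: EUR 0 or EUR 100, each with probability 1/2.
    lottery = [(0.0, 0.5), (100.0, 0.5)]
    expected_value = sum(x * p for x, p in lottery)  # E[X] = 50

    utilities = {
        "risk averse  (concave, u(x) = sqrt(x))": math.sqrt,
        "risk neutral (linear,  u(x) = x)      ": lambda x: x,
        "risk seeking (convex,  u(x) = x**2)   ": lambda x: x ** 2,
    }

    for label, u in utilities.items():
        expected_utility = sum(u(x) * p for x, p in lottery)  # E[u(X)]
        utility_of_ev = u(expected_value)                     # u(E[X])
        print(f"{label}: E[u(X)] = {expected_utility:8.2f}  vs  u(E[X]) = {utility_of_ev:8.2f}")

    # Risk aversion:   E[u(X)] <  u(E[X])  -> the sure expected value is preferred.
    # Risk neutrality: E[u(X)] == u(E[X])  -> indifference.
    # Risk seeking:    E[u(X)] >  u(E[X])  -> the lottery is preferred.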

Overview of utility functions and their categorization

Please refer to [2] for an introduction and an overview on the mathematical treatment of decision problems with a high degree of uncertainty. In particular, so-called capacities and the Choquet integral are introduced.

Literature:
[1]
Grabisch, M. (2016) Set functions, games and capacities in decision making. 1st ed. Switzerland: Springer (Theory and decision library C, volume 46).

[2]
von Felbert, A. (2019) Uncertainty and Capacities in Finance, www.deep-mind.org. Available at: http://www.deep-mind.org/2019/06/01/uncertainty-and-capacities-in-finance/.

[3]
El-Seidy, E., Hussein, E.S. and Alabdala, A.T. (no date) ‘Models of Combinatorial Games and Some Applications: A Survey’, 5(2), pp. 27–41.

[4]
Grabisch, M. (2016) Set functions, games and capacities in decision making. New York, NY: Springer Berlin Heidelberg.

[5]
Knight, F.H. (1964) Risk, Uncertainty and Profit. New York: Augustus M. Kelley (Reprints of Economic Classics). Available at: https://mises.org/sites/default/files/Risk,%20Uncertainty,%20and%20Profit_4.pdf.

[6]
Föllmer, H. and Schied, A. (2004) Stochastic finance: an introduction in discrete time. 2., rev.extended ed. Berlin: de Gruyter (De Gruyter studies in mathematics, 27).

[7]
Von Neumann, J. and Morgenstern, O. (2007) Theory of games and economic behavior. 60th anniversary ed. Princeton, N.J. ; Woodstock: Princeton University Press (Princeton classic editions).

[8]
Bernoulli, D. (1954) ‘Exposition of a New Theory on the Measurement of Risk’, Econometrica, 22(1), pp. 23–36. Available at: http://links.jstor.org/sici?sici=0012-9682%28195401%2922%3A1%3C23%3AEOANTO%3E2.0.CO%3B2-X.

[9]
Wakker, P.P. (2010) Prospect theory: for risk and ambiguity. Cambridge ; New York: Cambridge University Press.