
Saturday 21 November 2009

Understanding Risk and Decision Making

Key ideas:

Probability is the likelihood of an outcome.  Probabilities are expressed numerically, but are often subjective.

Impact is the effect that a particular outcome will have.

Decision trees help us get a grip on our alternatives.

The concept of expected value helps us compare alternatives based on probability and impact.

Risk profiles take us beyond expected value to consider unacceptable or fatal downsides.

Getting more information to reduce subjectivity in decision making takes time and costs money.


Ref:
Risk:  How to make decisions in an uncertain world
Editor:  Zeger Degraeve

Making Life Decisions: appraising cost, risk and expected value, with limited information about the future

The dice games are simple parallels with the types of decisions we take every day in our lives.  Investments offer the most direct comparison.  With a limited sum to invest, you have to evaluate the probability of making a profit, the expected value and the risk involved for each investment alternative.  And, as with the dice, you have the alternative not to play, which is 100% safe, but will not make you any money.

We make other kinds of decisions too, where the investment is not always financial:
  • selecting a savings account (which will make you richest in the long term?)
  • buying a house (will prices fall or rise?)
  • deciding which people to socialise with (who will turn out to be better company?)
  • renting a film to watch (which will you enjoy the most?)

However vaguely or subconsciously, we are appraising cost, risk and expected value, with limited information about the future, all the time - even if the only cost is our leisure time, the only expected value a fleeting enjoyment, and the only potential loss a mild feeling of irritation.

Decision making: Risk, Probability, Impact, Subjectivity, Decision trees and Expected Value

You are invited to play two versions of a dice game, version A and version B.  In each version, you bet $1 on the throw of a dice.  Throwing a six wins a prize; throwing any other number means you lose your $1.

In version A of this game, a bet costs $1, but you can win $10.  Faced with this game, you have two alternatives - to play or not to play.  Once playing, there is nothing you can do to affect the outcome - so your decision on whether to play has to be made on the basis of the probabilities and impacts involved.  They are depicted on the decision tree here to help your decision.

http://spreadsheets.google.com/pub?key=te9MzyHoIN6EyuoHmfDxMaw&output=html

Because the situation is simple, the probabilities of the various possible outcomes can be objectively known.  There is no subjectivity over the probabilities.  The impacts, too, are fixed and clearly set out by the rules of the games (the prizes and the cost of playing).  If a choice is made to play, the probability of winning is 1 in 6 (0.167 or 16.7%) and the probability of losing 5 in 6 (0.833 or 83.3%).  If a choice is made not to play, risk is avoided (there is a single outcome that is certain) but there is also no potential benefit.
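These objective probabilities are easy to check exactly. A minimal sketch in Python, using exact fractions so no rounding creeps in:

```python
from fractions import Fraction

# Probabilities for a single throw of a fair dice in version A.
p_win = Fraction(1, 6)    # throwing a six
p_lose = Fraction(5, 6)   # throwing any other number

print(round(float(p_win), 3))   # 0.167
print(round(float(p_lose), 3))  # 0.833
```

The two probabilities sum to exactly 1, as they must: the outcomes are exhaustive and mutually exclusive.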

In version B of the dice game, the stake and odds remain the same, but you can only win $5.  As before, we have to decide whether to play or not; the alternative of walking away remains, but offers no benefit.  Is it better to play or not to play?  Version A seems better than version B, but how much better?  Is B worth playing as well, despite the lower prize?  How can we make a decision about where to make an investment?  Most people can offer answers to these questions based on an intuitive, subjective grasp of probability and impact.  We make decisions all the time on this basis.  But for business decisions, we need to move beyond subjectivity whenever we can.  We need to quantify things wherever possible.


The concept of expected value (EV)

To compare different alternatives against each other in a quantitative way in order to determine whether a risk is worth taking, we can use the concept of expected value (EV).  The expected value of a risk is obtained by multiplying probability by impact for each possible outcome, and adding all the results together.  If a particular impact is negative, the value for that outcome is also negative. 

The table below shows the expected value calculation for playing version A of the dice game.  The expected value is 0.67: a 1-in-6 chance of gaining $9, less a 5-in-6 chance of losing $1.  Because this is a positive value, it indicates that the game is worth playing.

http://spreadsheets.google.com/pub?key=te9MzyHoIN6EyuoHmfDxMaw&output=html

In version B, the reduced prize (a variation in impact) changes the picture, as the table also shows.  The expected value of version B is negative: if you play it repeatedly, you will steadily lose money over time.

In this case, the alternative not to play, although it brings no benefit, has a higher expected value (zero) than playing (-0.17).  You are better off keeping your $1.
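The same calculation, sketched for version B (a $1 stake and a $5 prize, so net impacts of +4 and -1) against the certain alternative of not playing:

```python
# Expected value of playing version B versus not playing at all.
ev_version_b = (1/6) * 4 + (5/6) * (-1)  # win $5 less $1 stake, or lose $1
ev_not_play = 1.0 * 0                    # a single certain, neutral outcome

print(round(ev_version_b, 2))  # -0.17
print(ev_not_play)             # 0.0
```

Zero beats -0.17, which is why walking away is the better alternative here despite offering no benefit.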

Expected value helps us ascertain whether a particular alternative is worth taking, based on our knowledge of probabilities and impacts.  But, unless the outcome of a decision is certain, expected value can only ever be used as a guide.

In version A, for example, the expected value of not playing is zero, and this is certain.  But if you decide to play, the only possible outcomes are winning $10 or losing your $1 - in other words, values of either +9 or -1.  An impact of +0.67 (the expected value) is impossible.
And, while a positive expected value of 0.67 makes the game nominally 'worth playing', the outcome of playing is not certain.  You might still lose.

Conversely, the negative expected value of version B, while it indicates you should not play, doesn't necessarily mean you won't win if you do.  The possible outcomes are values of +$4 or -$1.  You might play once and win.  You might even play three times in a row and win all three times, although the probability of this is 0.0046 (or less than 1%).  Despite the negative expected value, a positive outcome remains possible.
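The three-wins-in-a-row figure comes from multiplying the probabilities of three independent throws:

```python
# Probability of throwing a six three times in a row:
# independent events, so the probabilities multiply.
p_win = 1 / 6
p_three_in_a_row = p_win ** 3

print(round(p_three_in_a_row, 4))  # 0.0046
```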

The actual probability of realising the expected value as a result of a single decision is zero.  However, if you played version A 100 times, you would find the average value across those many decisions tending towards 0.67 - you would be around $67 better off.  This would confirm the accuracy of your initial calculation of expected value.
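This long-run tendency can be checked with a simple simulation. A sketch, assuming the version A rules (any dice-rolling code would do):

```python
import random

random.seed(1)  # fixed seed so the sketch is repeatable

def play_version_a():
    # Stake $1; a six (1 chance in 6) wins $10, so the net gain is +9.
    return 9 if random.randint(1, 6) == 6 else -1

n = 100_000
total = sum(play_version_a() for _ in range(n))
print(round(total / n, 2))  # hovers near the expected value of 0.67
```

Over a small number of plays the average swings widely; it is only across many repetitions that it settles towards the expected value.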

Calculating or estimating expected value wrongly - or not wanting to calculate it at all - has serious consequences for decision making.  Consider the National Lottery.  Although the prize (potential upside) is enormous, the tiny probability of winning gives the game a negative expected value.  But the lure of the prize outweighs the rational considerations of probability, making people mentally distort probabilities (if they consciously think in those terms at all) and decide to take an illogical risk.  This is the essence of the appeal of gambling, and points the way towards the psychology of risk.

So, despite the name, we can never expect the expected value.  Some may ask, in that case, why use the concept at all?  The answer is to help in making decisions, rather than in predicting the future.  As we've seen, there are no facts about the future, only probabilities.  In this case, probabilities are known but a reliable prediction of the outcome remains impossible - the dice will decide!

We have already seen how, in most business decisions, the picture is clouded by subjectivity.  Not only is it impossible to predict the future, there will also be uncertainty over impacts and probabilities.

Expected value is calculated from probability and impact information or estimates.  Whatever subjectivity or imprecision is inherent in our probability and impact figures will feed through into expected values.  They are only as good as the information from which they are calculated.  Therefore, just as with probabilities, it is important to remember, and explain to others, when subjectivity is a factor.

Friday 20 November 2009

Using Decision trees to see how probability and impact relate to each other

We can use the simple example of a dice game.  In this game, you bet $1 on the throw of a dice.  Throwing a six wins a prize; throwing any other number means you lose your $1.

In version A of this game:

A bet costs $1, but you can win $10

Faced with this game, you have two alternatives - to play or not to play.

Once playing, there is nothing you can do to affect the outcome - so your decision on whether to play has to be made on the basis of the probabilities and impacts involved. 

Because the situation is simple, the probabilities of the various possible outcomes can be objectively known.  There is no subjectivity over the probabilities.  The impacts, too, are fixed and clearly set out by the rules of the game (the prizes and the cost of playing). 

If a choice is made to play, the probability of winning is 1 in 6 (0.167 or 16.7%) and the probability of losing 5 in 6 (0.833 or 83.3%).

If a choice is made not to play, risk is avoided (there is a single outcome that is certain) but there is also no potential benefit.


Decision tree for dice game version A:

Decision:  Play dice game with chance of winning $10?  Yes or No

NO
Decision ---->  Risky Event  ---> Possible outcomes ---->   Probability ----->  Impact

No   ----->  Nil ------>  Avoid risk, keep money in pocket ----> 1.0 (certain) ----->  Neutral: spend nothing, win nothing


YES
Decision ----> Risky Event ---> Possible outcomes ----> Probability -----> Impact


Yes ----->  Stake $1 on throw of dice ----> Number 6  ----> 0.167 (1 in 6) ----> Gain: spend $1, win $10
or
Yes -----> Stake $1 on throw of dice -----> Number 1, 2, 3, 4, or 5  -----> 0.833 (5 in 6) ---->  Loss: spend $1, win nothing
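The tree above can be sketched as a small data structure: each alternative (play or not) maps to its list of outcome branches, each a (probability, impact) pair, and the expected value of each alternative falls out directly. This is a minimal illustration, not a general decision-tree library:

```python
# Decision tree for dice game version A as nested data.
tree = {
    "no":  [(1.0, 0)],             # keep your $1: certain, neutral impact
    "yes": [(1/6, 9), (5/6, -1)],  # stake $1: win $10 on a six, else lose it
}

for decision, branches in tree.items():
    ev = sum(p * impact for p, impact in branches)
    print(decision, round(ev, 2))  # no 0.0, then yes 0.67
```

For a tree with more levels (decisions leading to further risky events), the same idea applies recursively: the expected value of a branch is computed from the expected values of its sub-branches.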


http://spreadsheets.google.com/pub?key=te9MzyHoIN6EyuoHmfDxMaw&output=html