Markov Chains and Boardgames like Monopoly with Mathematica (4-5-2008)

Description: How to use Mathematica to analyze board games like Chutes and Ladders and Monopoly using Markov Chains.
Page 1: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Using Mathematica for Analyzing Board Games to Teach the Basics of Markov Chains

Academic Computing Conference, April 5, 2008
Central Connecticut State University

Roger Bilisoly, Ph.D.
Department of Mathematical Sciences
Central Connecticut State University
New Britain, Connecticut

Page 2: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Conditional Probability

• Let A, B be two events

• P(A | B) = Probability of A given that B has happened.

• P(A | B) = P(A and B)/P(B)

– This is the proportion of B’s that are also A’s

• Example: Draw 5 cards

– P(5th card an Ace | first 4 cards are Aces) = 0
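A quick Mathematica check of the definition (a sketch, not from the slides; the events A and B below are just illustrative):

outcomes = Tuples[Range[6], 2];              (* all 36 rolls of two dice *)
a = Select[outcomes, Total[#] == 8 &];       (* A = "sum is 8" *)
b = Select[outcomes, EvenQ[First[#]] &];     (* B = "first die is even" *)
Length[Intersection[a, b]]/Length[b]         (* P(A and B)/P(B) = 3/18 = 1/6 *)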


Page 3: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Independence

• If P(A | B) = P(A), then knowing about B makes no difference when computing the probability of A

• Example: Let A = roll of a die and B = the result of a coin flip. Obviously we have:

– P(A | B) = P(A)

– P(B | A) = P(B)

• This implies P(A and B) = P(A)*P(B)
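– For example, P(roll a 3 and flip heads) = (1/6)(1/2) = 1/12.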


Page 4: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Random Variables

• A random variable is a random number generator

• For example

– Roll 2 dice, let X = sum of values

– Flip 10 coins, let X = # of tails

• Since random variables describe events, they can be used in conditional probabilities


Page 5: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Markov Chains

• This is a sequence of random variables X1, X2, X3, … such that
– P(Xn | Xn-1, Xn-2, …, X1) = P(Xn | Xn-1)

– That is, given the immediately preceding variable Xn-1, the rest of the variables X1, …, Xn-2 do not add any further information. This is an example of conditional independence.

– Note: This does not mean Xn is independent of X1 or of X2 or of X3, …. In fact, Xn is usually dependent on all of these.


Page 6: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Markov Chains - Continued

• X1, X2, X3, … could be a sequence of events evolving over time, but this need not be the case.

• Events evolving over time might be modeled as a Markov chain, but the random variables need not match the times in a one-to-one fashion
– We will see an example of this with the game Monopoly


Page 7: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Board Games: E.g., Monopoly

From http://www.worldofmonopoly.co.uk/history/images/bd-usa.jpg

Page 8: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Monopoly and Markov Chains

• Number the squares 1, 2, 3, …, 40, where 1=Go, 2=Mediterranean Avenue, etc.

• Let X1 = Position after first roll of dice, X2 = Position after second roll, and so forth.

• Note that we have

– P(Xn | Xn-1, Xn-2, …, X1) = P(Xn | Xn-1)

– That is, the next position only requires knowing the current position.


Page 9: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Transition Probability Matrix

• Key to understanding a Markov chain is knowing the probabilities from any state to any other state.

• For example, in Monopoly, what are the probabilities from any square to any other square?

– Moves are determined by dice, but there are complications: doubles, Community Chest and Chance cards, the Go to Jail square.


Page 10: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Simplest Board Game: A Straight Line

• Start on the left. Goal is to get to the last square on the right. Move by tossing a coin, H = move 1 to the right, T = move 2 to the right.

[Diagram: a 10-square linear board, labeled Square 1 through Square 10.]


Page 11: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Linear Game Example

• P(1 to 2) = ½, P(1 to 3) = ½, P(1 to other) = 0

• P(2 to 3) = ½, P(2 to 4) = ½, P(2 to other) = 0

• P(3 to 4) = ½, P(3 to 5) = ½, P(3 to other) = 0

• …

• P(8 to 9) = ½, P(8 to 10) = ½, P(8 to other) = 0

• What about P(9 to 10)?



Page 12: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Example, Continued

• P(9 to 10) = 1 is a possibility

• P(9 to 10) = ½, P(9 to 9) = ½ is another possibility

– Both of these are used in actual games

Is there an even shorter way to summarize this information? Yes, using matrices.
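One direct way to build this 10-by-10 matrix in Mathematica is sketched below (variable names are mine; the presenter's more general helper tranMatrixLinear1 appears on a later slide):

n = 10;
p = Table[0, {n}, {n}];
Do[p[[r, Min[r + 1, n]]] += 1/2;   (* heads: move one square right *)
   p[[r, Min[r + 2, n]]] += 1/2,   (* tails: move two squares right; overshoot ends on square 10 *)
  {r, 1, n - 1}];
p[[n, n]] = 1;                     (* square 10 is absorbing *)
p // MatrixForm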


Page 13: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Matrix of Probabilities

Rows represent current position, columns represent next position.

Last row indicates that once a player gets to the last square, he or she stays there forever.

      0  1/2 1/2  0   0   0   0   0   0   0
      0   0  1/2 1/2  0   0   0   0   0   0
      0   0   0  1/2 1/2  0   0   0   0   0
      0   0   0   0  1/2 1/2  0   0   0   0
      0   0   0   0   0  1/2 1/2  0   0   0
      0   0   0   0   0   0  1/2 1/2  0   0
      0   0   0   0   0   0   0  1/2 1/2  0
      0   0   0   0   0   0   0   0  1/2 1/2
      0   0   0   0   0   0   0   0   0   1
      0   0   0   0   0   0   0   0   0   1


Page 14: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Transient vs. Recurrent States

• States are visited either a finite number of times or infinitely often.

• The former are called transient, the latter recurrent

• Example: Squares 1 through 9 are transient, square 10 is recurrent.



Page 15: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Game with both Transient and Recurrent States

• A linear game attached to one or more loops provides an example of this.

– Example (the player starts on square 1, moves 1 or 2 squares each turn, and squares 4-11 form a loop):

      0   .5  .5   0   0   0   0   0   0   0   0
      0    0  .5  .5   0   0   0   0   0   0   0
      0    0   0  .5  .5   0   0   0   0   0   0
      0    0   0   0  .5  .5   0   0   0   0   0
      0    0   0   0   0  .5  .5   0   0   0   0
P =   0    0   0   0   0   0  .5  .5   0   0   0
      0    0   0   0   0   0   0  .5  .5   0   0
      0    0   0   0   0   0   0   0  .5  .5   0
      0    0   0   0   0   0   0   0   0  .5  .5
      0    0   0  .5   0   0   0   0   0   0  .5
      0    0   0  .5  .5   0   0   0   0   0   0


Page 16: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Transient vs. Recurrent

• Knowing the mean number of times a transient state occurs is interesting.

• Knowing the probability of reaching a recurrent state is interesting.

• Both of these questions can be easily answered using linear algebra.


Page 17: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Transient States: Mean Number of Visits

• Let P = matrix of transition probabilities.

• Let PT be the submatrix of P corresponding to only the transient states.

For the linear game, P is the 10-by-10 matrix of the earlier slide, and PT is its upper-left 9-by-9 block (drop the row and column for the absorbing square 10):

        0  1/2 1/2  0   0   0   0   0   0
        0   0  1/2 1/2  0   0   0   0   0
        0   0   0  1/2 1/2  0   0   0   0
        0   0   0   0  1/2 1/2  0   0   0
PT =    0   0   0   0   0  1/2 1/2  0   0
        0   0   0   0   0   0  1/2 1/2  0
        0   0   0   0   0   0   0  1/2 1/2
        0   0   0   0   0   0   0   0  1/2
        0   0   0   0   0   0   0   0   0


Page 18: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Mean # of Visits (Using Mathematica)

• S = (I – PT)^-1 gives the mean number of visits starting at any state.

S =
1.  0.5  0.75  0.625  0.6875  0.65625  0.671875  0.664063  0.667969
0.  1.   0.5   0.75   0.625   0.6875   0.65625   0.671875  0.664063
0.  0.   1.    0.5    0.75    0.625    0.6875    0.65625   0.671875
0.  0.   0.    1.     0.5     0.75     0.625     0.6875    0.65625
0.  0.   0.    0.     1.      0.5      0.75      0.625     0.6875
0.  0.   0.    0.     0.      1.       0.5       0.75      0.625
0.  0.   0.    0.     0.      0.       1.        0.5       0.75
0.  0.   0.    0.     0.      0.       0.        1.        0.5
0.  0.   0.    0.     0.      0.       0.        0.        1.

On average a player moves 1.5 squares per toss, so the mean time on a square converges to 1/1.5 = 2/3.

First row gives results starting at the first square. For example, entry (1,1) = 1 means 1 visit on average (the player starts on the first square and must move forward one or two squares, so exactly one visit is guaranteed). Entry (1,2) = 0.5 since there is a 50% chance of going one square to the right.
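The first row also sums to about 6.22, which is the expected number of coin tosses needed to finish the game from square 1 (each visit to a transient square is followed by exactly one toss).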


Page 19: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Gambler’s Ruin

• Start with $x, win or lose $1 each play. Stop when $0 or $max is reached.

• This is a board game where winning moves a square to the right, losing a square to the left.

• Now there are two recurrent states: one for $0 and one for $max.

• Two questions:

– What is the probability of reaching $0 and $max?

– What is the mean number of visits to the other states?
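A sketch of how the matrices on the next two slides can be computed in Mathematica (variable names are mine, not the presenter's):

n = 11;                                    (* states $0, $1, ..., $10 *)
p = Table[0, {n}, {n}];
Do[p[[i, i - 1]] = 1/2; p[[i, i + 1]] = 1/2, {i, 2, n - 1}];
p[[1, 1]] = 1; p[[n, n]] = 1;              (* $0 and $10 are absorbing *)
pt = p[[2 ;; n - 1, 2 ;; n - 1]];          (* transient states $1 through $9 *)
r = p[[2 ;; n - 1, {1, n}]];               (* one-step transient-to-recurrent probabilities *)
s = Inverse[IdentityMatrix[n - 2] - pt];   (* mean number of visits *)
f = s.r                                    (* probabilities of ending at $0 or $10 *)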


Page 20: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Gambler’s Ruin for $0 and $10 (Fair payoffs)

Here P(Win) = P(Lose) = 1/2.

        0  1/2  0   0   0   0   0   0   0
       1/2  0  1/2  0   0   0   0   0   0
        0  1/2  0  1/2  0   0   0   0   0
        0   0  1/2  0  1/2  0   0   0   0
PT =    0   0   0  1/2  0  1/2  0   0   0
        0   0   0   0  1/2  0  1/2  0   0
        0   0   0   0   0  1/2  0  1/2  0
        0   0   0   0   0   0  1/2  0  1/2
        0   0   0   0   0   0   0  1/2  0

S = (I – PT)^-1 =

1.8  1.6  1.4  1.2  1.   0.8  0.6  0.4  0.2
1.6  3.2  2.8  2.4  2.   1.6  1.2  0.8  0.4
1.4  2.8  4.2  3.6  3.   2.4  1.8  1.2  0.6
1.2  2.4  3.6  4.8  4.   3.2  2.4  1.6  0.8
1.   2.   3.   4.   5.   4.   3.   2.   1.
0.8  1.6  2.4  3.2  4.   4.8  3.6  2.4  1.2
0.6  1.2  1.8  2.4  3.   3.6  4.2  2.8  1.4
0.4  0.8  1.2  1.6  2.   2.4  2.8  3.2  1.6
0.2  0.4  0.6  0.8  1.   1.2  1.4  1.6  1.8
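For example, the fifth row sums to 25: starting with $5, the fair game lasts 25 plays on average before $0 or $10 is reached.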

Page 21: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Gambler’s Ruin: Recurrent State Probabilities

• Let R = matrix of transition probabilities from transient to recurrent states.

• Let F = matrix of eventual probabilities of reaching recurrent states from transient states.

• In general
– F = SR = (I – PT)^-1 R
– Example below (fair game)

Rows are the transient states $1, …, $9; columns are the recurrent states $0 and $10.

R =
  1/2   0
   0    0
   0    0
   0    0
   0    0
   0    0
   0    0
   0    0
   0   1/2

F =
  9/10  1/10
  4/5   1/5
  7/10  3/10
  3/5   2/5
  1/2   1/2
  2/5   3/5
  3/10  7/10
  1/5   4/5
  1/10  9/10


Page 22: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Gambler’s Ruin for U.S. Roulette

P is the 11-by-11 matrix on the states $0, $1, …, $10 with $0 and $10 absorbing and, for the transient states, P(lose $1) = 10/19 and P(win $1) = 9/19 (an even-money bet in U.S. roulette wins 18 times out of 38).

F = SR =
  0.940518  0.0594822
  0.874426  0.125574
  0.800992  0.199008
  0.719397  0.280603
  0.628737  0.371263
  0.528003  0.471997
  0.416077  0.583923
  0.291715  0.708285
  0.153534  0.846466

(Rows are the starting fortunes $1 through $9; columns are the probabilities of ending at $0 and at $10.)

Note the bias toward reaching $0.
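The Mathematica sketch given earlier for the fair game reproduces this F matrix if the two transition probabilities 1/2 are replaced by 10/19 (move toward $0) and 9/19 (move toward $10).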


Page 23: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Simplified Version of Chutes and Ladders

Game is 20 squares long. Must land exactly on the last square to win (rolls that overshoot leave the player in place). Square 7 goes to 14, Square 17 goes to 3. The expected number of visits to each square is given below.

[Slide shows the 20-by-20 transition matrix for this game: each roll advances 1-6 squares with probability 1/6, landings on squares 7 and 17 are redirected to 14 and 3 (so those columns are zero), rolls that would pass square 20 leave the player in place, and square 20 is absorbing. A bar chart plots the expected number of visits (Expected #) against Square #, on a scale of roughly 0 to 3.]
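A sketch (not from the slides) of setting this game up in Mathematica; variable names are mine, and the rules follow the matrix described above (overshooting rolls stay put, landing on 7 or 17 redirects immediately):

n = 20;
p = Table[0, {n}, {n}];
Do[Do[If[r + k <= n, p[[r, r + k]] += 1/6, p[[r, r]] += 1/6], {k, 1, 6}],
  {r, 1, n - 1}];
p[[n, n]] = 1;                                  (* square 20 is absorbing *)
Do[{from, to} = c;
   p[[All, to]] += p[[All, from]];              (* landing on 'from' sends you to 'to' *)
   p[[All, from]] = Table[0, {n}],
  {c, {{7, 14}, {17, 3}}}];
trans = Complement[Range[n - 1], {7, 17}];      (* transient squares *)
s = Inverse[IdentityMatrix[Length[trans]] - p[[trans, trans]]];
s[[1]]   (* expected number of visits to each transient square, starting from square 1 *)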


Page 24: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Full Version of Chutes and Ladders

• This makes a good project, or a good example to discuss in class.

• “Using Games to Teach Markov Chains” by Roger Johnson.

– See: http://findarticles.com/p/articles/mi_qa3997/is_200312/ai_n9338086


Page 25: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Looping Board Games

• In games that loop, the states are usually recurrent.

• Hence, long-term probabilities are of interest.

[Diagram: a board loop with squares labeled 1, 2, 3, 4, ….]

These long-term probabilities are given by the eigenvector of the transpose of P, the transition probability matrix, associated with eigenvalue 1 (the stationary distribution). This is easily done with Mathematica.


Page 26: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Example 1: 12-Square Loop

• Use one die for movement and the 12-square loop of the last slide. We have:

P = the 12-by-12 matrix with P(i to i+k) = 1/6 for k = 1, …, 6 (squares numbered mod 12) and all other entries 0; for example, row 3 is

0  0  0  1/6  1/6  1/6  1/6  1/6  1/6  0  0  0

and each row is the previous row rotated one place to the right.

Limiting probabilities all equal 1/12.

This P has columns that also add to one, which makes it a doubly stochastic matrix. For such an n-by-n transition matrix the limiting probabilities are all equal to 1/n.

Using the same randomization device for moving from each square results in a doubly stochastic matrix.
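A quick numerical check that the limiting probabilities really are uniform (a sketch; the matrix is built directly here, and the eigenvector calculation mirrors the one on the later Mathematica slide):

n = 12;
p = Table[If[MemberQ[Range[6], Mod[j - i, n]], 1/6, 0], {i, n}, {j, n}];
out = Eigensystem[Transpose[N[p]]];
pi = out[[2, 1]]/Total[out[[2, 1]]];
Chop[pi]   (* twelve entries, each 0.08333... = 1/12 *)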


Page 27: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Example 2: 12-Square Loop

      0   0.1  0.2   0   0.5  0.3   0    0    0    0    0    0
      0    0   0.1  0.2   0   0.5  0.3   0    0    0    0    0
      0    0    0   0.1  0.2   0   0.5  0.3   0    0    0    0
      0    0    0    0   0.1  0.2   0   0.5  0.3   0    0    0
      0    0    0    0    0   0.1  0.2   0   0.5  0.3   0    0
P =   0    0    0    0    0    0   0.1  0.2   0   0.5  0.3   0
      0    0    0    0    0    0    0   0.1  0.2   0   0.5  0.3
     0.3   0    0    0    0    0    0    0   0.1  0.2   0   0.5
     0.5  0.3   0    0    0    0    0    0    0   0.1  0.2   0
      0   0.5  0.3   0    0    0    0    0    0    0   0.1  0.2
     0.2   0   0.5  0.3   0    0    0    0    0    0    0   0.1
     0.1  0.2   0   0.5  0.3   0    0    0    0    0    0    0

Here P(Advance 1) = 0.1, P(Advance 2) = 0.2, P(Advance 4) = 0.5,P(Advance 5) = 0.3 for every square. Again this is doubly stochastic, hence the limiting probabilities are 1/12.


Page 28: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Periodicity

• Let Pii = P(state i to state i in one move)

• Let Pii(n) = P(state i to state i in n moves)

• If d is the largest integer such that Pii(n) = 0 whenever d does not divide n, then the Markov chain has period d.

• Example: For the 12-square loop, let P(Advance 1) = P(Move back 1) = 1/2, which is a random walk on the loop. This has period 2 (see next slide).


Page 29: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Example of Periodicity: Random Walk on Loop

• A player can only return to any square after an even number of moves.

• Limiting probabilities are 1/6 for “even squares” after an odd number of moves, and 1/6 for “odd squares” after an even number of moves.

P = the 12-by-12 matrix with P(i to i+1) = P(i to i-1) = 1/2 (squares numbered mod 12) and all other entries 0; for example, row 1 is

0  1/2  0  0  0  0  0  0  0  0  0  1/2
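A quick way to see the period-2 behavior numerically (a sketch; the random-walk matrix is built directly here):

n = 12;
p = Table[If[MemberQ[{1, n - 1}, Mod[j - i, n]], 1/2, 0], {i, n}, {j, n}];
Diagonal[MatrixPower[p, 5]]   (* all zeros: no return after an odd number of moves *)
Diagonal[MatrixPower[p, 6]]   (* nonzero entries: returns are possible after an even number *)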


Page 30: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Different Limiting Probabilities Require Heterogeneous Moves!

• If movement depended solely on the roll of two dice, then the limiting probabilities would all be 1/40 (since there are 40 squares on a Monopoly board).

• For example, in Monopoly, there is a “Go to Jail” square, and some Chance and Community Chest cards direct the player to go to certain squares. These changes alter the limiting probabilities.


Page 31: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Board with “Go to Jail” Square (Moves based on sum of 2 dice)

[Diagram: a 40-square loop played clockwise, with a Start square, a Jail square, and a “Go to Jail” square.]

Now the limiting probabilities are unequal. Note that the “Go to Jail” square has 0 probability since a move never ends on that square.

{0.0229,0.0231,0.0233,0.0236,0.0232,0.023,0.0229,0.0229,0.023,0.0231,0.05,0.0231,0.0239,0.0246,0.0253,0.0261,0.027,0.028,0.0276,0.0273,0.0271,0.0269,0.0267,0.0264,0.0268,0.027,0.0271,0.0271,0.027,0.0269,0,0.0269,0.0261,0.0254,0.0247,0.0239,0.023,0.022,0.0224,0.0227}

Besides Jail, the most likely square is 7 squares past Jail (with probability 0.0280).


Page 32: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Approximate Monopoly

• Ignore that three doubles in a row means going to jail.

• There are 10 (of 16) Chance cards that relocate the player: Advance to Go, Advance to Illinois Avenue, Advance to the next Utility, Advance to the nearest Railroad (2 cards), Advance to St. Charles Place, Go to Jail, Go to Reading RR, Go to Boardwalk, Go back 3 spaces.

• There are 2 (of 16) Community Chest cards that relocate the player: Advance to Go, Go to Jail.

• Assume that each card has an equal chance of appearing (equivalent to shuffling the cards each time a player lands on Chance or Community Chest).

• Assume players immediately pay to get out of jail.
– This is the best strategy for the beginning of the game.
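For example, under these assumptions a player who lands on a Chance square is relocated with probability 10/16, and is sent directly to Jail by a Chance card with probability 1/16.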


Page 33: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

[P = the 40-by-40 transition probability matrix for approximate Monopoly, with exact rational entries (such as 7/5760, 7/288, 1/18, 5/36). It is too large to reproduce legibly here; its limiting probabilities appear on the next slide.]

Page 34: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Limiting Probabilities of Approximate Monopoly

Limiting probabilities are given below and plotted to the right (rescaled).

[Plot: the 40-square board with squares shaded by limiting probability; Start, Jail, and Illinois Avenue are labeled.]

{0.03114,0.02152,0.019,0.02186,0.02351,0.02993,0.02285,0.00876,0.02347,0.02331,0.05896,0.02736,0.02627,0.02386,0.02467,0.02919,0.02777,0.02572,0.02917,0.03071,0.02875,0.0283,0.01048,0.02739,0.03188,0.03064,0.02707,0.02679,0.02811,0.02591,0,0.02687,0.02634,0.02377,0.0251,0.02446,0.00872,0.02202,0.02193,0.02647}

The results above are close to values found empirically, e.g., Truman Collins’ results based on 32 billion rolls of the dice (differences are less than 1%).

http://www.tkcs-collins.com/truman/monopoly/monopoly.shtml


Page 35: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Three Doubles in a Row Puts You in Jail

• This can be handled by using three states for each square. For example, let 1, 2 and 3 stand for Go. – 1 means no prior doubles, – 2 means one prior double and – 3 means 2 prior doubles.

• If a player is in state 3 and rolls a double, then he or she goes to jail. States 1 and 2 have the same probabilities as computed earlier.

• This has been done, e.g., “Monopoly as a Markov Process” by Robert Ash and Richard Bishop, Mathematics Magazine, 45(1), Jan., 1972, 26-29.

• Alternative method: The probability of 3 doubles in a row is 1/216, which can be used to approximate this happening. This is done in “Take a Walk on the Boardwalk” by Stephen Abbott and Matt Richey, The College Mathematics Journal, 28(3), May, 1997, 162-171.


Page 36: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Mathematica: Create P and Compute S

tranMatrixLinear1[n_, probs_] := Module[{p = {}, r, i},
  Do[AppendTo[p, {Table[0, {i, 1, r - 1}],
      probs[[1 ;; Min[n - r + 1, Length[probs]]]],
      {Table[0, {i, r + Length[probs], n}]}} // Flatten],
    {r, 1, n}];
  Do[p[[r, n]] = 1 - Fold[Plus, 0, p[[r, 1 ;; n - 1]]], {r, 1, n}];
  Return[p]
]

n = 10;
p = tranMatrixLinear1[n, {0, 1, 1}/2];
pt = p[[1 ;; n - 1, 1 ;; n - 1]];
s = Inverse[IdentityMatrix[n - 1] - pt];
s // MatrixForm // N

1.  0.5  0.75  0.625  0.6875  0.65625  0.671875  0.664063  0.667969
0.  1.   0.5   0.75   0.625   0.6875   0.65625   0.671875  0.664063
0.  0.   1.    0.5    0.75    0.625    0.6875    0.65625   0.671875
0.  0.   0.    1.     0.5     0.75     0.625     0.6875    0.65625
0.  0.   0.    0.     1.      0.5      0.75      0.625     0.6875
0.  0.   0.    0.     0.      1.       0.5       0.75      0.625
0.  0.   0.    0.     0.      0.       1.        0.5       0.75
0.  0.   0.    0.     0.      0.       0.        1.        0.5
0.  0.   0.    0.     0.      0.       0.        0.        1.

Page 37: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Mathematica: Create and Compute Limiting Probabilities

tranMatrixLoop1[n_, probs_] := Module[{c, r, p = {}, row},
  row = {probs, {Table[0, {i, Length[probs] + 1, n}]}} // Flatten;
  Do[row = RotateRight[row];
    AppendTo[p, row], {i, 1, n}];
  Return[p]
]

n=40; limit=n/4+1;

p=tranMatrixLoop1[n,{0,1,2,3,4,5,6,5,4,3,2,1}/36];

p[[All,11]]+=p[[All,31]]; (* Go to Jail *)

p[[All,31]]=Table[0,{40}];

out=Eigensystem[p//Transpose//N];

pi=out[[2,1]]/Fold[Plus,0,out[[2,1]]];

Round[pi,0.0001]

{0.0229,0.0231,0.0233,0.0236,0.0232,0.023,0.0229,0.0229,0.023,0.0231,

0.05,0.0231,0.0239,0.0246,0.0253,0.0261,0.027,0.028,0.0276,0.0273,

0.0271,0.0269,0.0267,0.0264,0.0268,0.027,0.0271,0.0271,0.027,0.0269,

0,0.0269,0.0261,0.0254,0.0247,0.0239,0.023,0.022,0.0224,0.0227}


Page 38: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Mathematica: Graphics

piMatrix = Table[Table[1,{limit}],{limit}];

piMatrix[[1]] = pi[[1;;limit]];

Do[piMatrix[[i,1]] = pi[[n-i+2]];
   piMatrix[[i,limit]] = pi[[limit+i-1]],
  {i,2,limit-1}];

piMatrix[[limit]] = pi[[n/2+1;;3 n/4+1]]//Reverse;

g1 = Graphics[Raster[piMatrix //Transpose] ];

g2 = ListLinePlot[gridLoop[n],

PlotStyle->Directive[Black,Thick],AspectRatio->1/n];

Show[g1,g2]

Raster[] takes a matrix and makes a 2D grid. However, the resulting image is flipped about the x-axis. This is reflected in the construction of piMatrix.


Page 39: Markov Chains and Boardgames like Monopoly with Mathematica 4-5-2008

Further Readings

• Abbott, Steve and Matt Richey. 1997. Take a Walk on the Boardwalk. The College Mathematics Journal. 28(3): 162-171.

• Althoen, S. C., L. King, and K. Schilling. 1993. How Long Is a Game of Snakes and Ladders? Mathematical Gazette. 77: 71-76.

• Ash, Robert and Richard Bishop. 1972. Monopoly as a Markov Process. Mathematics Magazine. 45: 26-29.

• Bewersdorff, Jörg, 2005. Luck, Logic, and White Lies: The Mathematics of Games. A. K. Peters, Ltd. Chapter 16 analyzes Monopoly with Markov Chains.

• Diaconis, Persi and Rick Durrett, 2000. Chutes and Ladders in Markov Chains, Technical Report 2000-20, Department of Statistics, Stanford University.

• Dirks, Robert. 1999. Hi Ho! Cherry-O, Markov Chains, and Mathematica. Stats. Spring (25): 23-27.

• Johnson, Roger W. 2003. Using Games to Teach Markov Chains. Primus. http://findarticles.com/p/articles/mi_qa3997/is_200312/ai_n9338086/print

• Gadbois, Steve. 1993. Mr. Markov Plays Chutes and Ladders. The UMAP Journal. 14(1): 31-38.

• Murrell, Paul. 1999. The Statistics of Monopoly. Chance. 12(4): 36-40.

• Stewart, Ian. 1996. How Fair Is Monopoly? Scientific American. 274(4): 104-105.

• Stewart, Ian. 1996. Monopoly Revisited. Scientific American. 275(4): 116-119.

• Tan, Baris. 1997. Markov Chains and the Risk Board Game. Mathematics Magazine. 70(5): 349-357.


