In the last post, I mentioned that the process of advancing from square to square in Chutes and Ladders was really a Markov chain. So what exactly does that mean? A Markov chain is defined to be (bear with me) a process consisting of a finite number of states where the probability of moving from one state to another depends only on the current state. Like most concepts in math, this is probably best understood with an example. We can consider a very simplified model of the weather to be a Markov chain. Suppose that the weather can only be sunny or rainy, and if it's sunny one day, there's a 90 percent chance it will be sunny the next day, and if it's rainy one day, there's a 40 percent chance it will be rainy the next day. We have two different states ("rainy" and "sunny"), and the probability of switching between these two states only depends on the current state (if the weather was rainy-sunny-rainy, the chance of it being rainy on the fourth day only depends on the weather on the third day, not on the first or second day's weather). For Chutes and Ladders, we can think of each square as a different state, and the spinner as a "transitioner" between states. Since the outcome of the spinner does not depend on what square you're on (there are always 6 options with a 1/6 chance of landing on each option), Chutes and Ladders seems to pass the Markov chain test.
It may seem cruel to reduce a children's game to a "Markov chain", but this is actually very useful. Let's return momentarily to the weather example to see why. Suppose it is sunny today, and we want to find the probability that it is sunny two days from now. For this to happen, the weather can either be sunny-sunny or rainy-sunny over the next two days. The probability of the first one occurring is (0.9)(0.9) = 0.81, or 81%, and the probability of the second one occurring is (0.1)(0.6) = 0.06, or 6%, so the total chance is 81% + 6% = 87%. A few observant mathematicians noticed that these sorts of calculations looked a lot like matrix multiplication. In fact, if we have a Markov chain with n different states, we can construct what is known as a "transition matrix" where the (i,j)th entry in the matrix corresponds with the probability of transitioning from state i to state j. For the weather example, the transition matrix would look like this:
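If you'd like to check the arithmetic yourself, here is a quick sketch in plain Python (no libraries, just nested lists) that builds the weather transition matrix and recovers the 87% figure by squaring it:

```python
# Transition matrix for the weather example.
# Rows and columns are ordered [sunny, rainy], so entry (i, j) is the
# probability of moving from state i to state j in one day.
M = [[0.9, 0.1],   # sunny -> sunny, sunny -> rainy
     [0.6, 0.4]]   # rainy -> sunny, rainy -> rainy

def mat_mult(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

M2 = mat_mult(M, M)
# M2[0][0] is the chance of sunny -> sunny in two days: approximately 0.87,
# matching the (0.9)(0.9) + (0.1)(0.6) calculation above.
print(M2[0][0])
```

Note that each row of the matrix sums to 1, since from any state the weather has to do *something* the next day.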
The entry in the top left corresponds with the chance of transitioning from "sunny" to "sunny", and the entry in the top right corresponds with the chance of transitioning from "sunny" to "rainy". If we square this matrix, we get:
Notice that in the top left corner, we have the probability 0.87, which is the value we found previously. In our transition matrix, the value in the top left corner was the probability of going from "sunny to sunny" in one change of state. When we square the transition matrix, the value in the top left corner is the probability of going from "sunny to sunny" in two changes of state. In general, if M is the transition matrix of a Markov chain, then the (i,j)th entry in the matrix M^n corresponds with the probability of transitioning from state i to state j in n changes of state.
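This general rule is easy to play with in code. Here is a small self-contained sketch that raises the transition matrix to the n-th power by repeated multiplication; for example, the top-left entry of M^3 is the chance it is sunny three days from now, given sun today:

```python
def mat_mult(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_power(M, n):
    """Return M^n by repeated multiplication (n >= 1).

    The (i, j) entry of the result is the probability of going from
    state i to state j in exactly n changes of state.
    """
    result = M
    for _ in range(n - 1):
        result = mat_mult(result, M)
    return result

M = [[0.9, 0.1],   # [sunny, rainy] ordering, as before
     [0.6, 0.4]]

M3 = mat_power(M, 3)
# M3[0][0]: probability of sunny three days from now, given sun today.
```

(For large n there are faster ways to compute matrix powers, but repeated multiplication keeps the idea transparent.)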
In Chutes and Ladders, we can use this to find the probability of going from "state 1" (the first square) to "state 100" (the last square) in a certain number of moves. Of course, since there are 100 spaces in Chutes and Ladders, the transition matrix is going to be 100 by 100, so it is no simple task to construct it. More to come on this later.
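To give a flavor of how that construction might go, here is a hedged sketch. The `jumps` dictionary below is purely illustrative (it is NOT the real board layout), and I've assumed one common rule for overshooting square 100 (you stay put); the real post's analysis may handle these details differently:

```python
# Hypothetical chutes and ladders: landing on a key sends you to its value.
# These positions are made up for illustration only.
jumps = {4: 14, 9: 31, 17: 7, 28: 84}

SIZE = 100  # squares 1 through 100

def build_transition_matrix():
    """Build the 100x100 transition matrix, one row per square."""
    M = [[0.0] * SIZE for _ in range(SIZE)]
    for square in range(1, SIZE + 1):
        for spin in range(1, 7):  # spinner: 1 through 6, each with chance 1/6
            target = square + spin
            if target > SIZE:
                target = square  # overshoot: stay put (one common house rule)
            target = jumps.get(target, target)  # slide along a chute/ladder
            M[square - 1][target - 1] += 1 / 6
    return M
```

Each row still sums to 1, and square 100 is "absorbing": once you reach it, you stay there, which is exactly what lets powers of this matrix answer "what is the chance the game is over after n spins?"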