Thursday, May 2, 2013

More seconds?


In the last blog post, in our attempt to find the probability that the player going second finishes second in a game with four players, we found that the probability that the player going first finishes first and the player going second finishes second is equal to the probability that a player going first finishes first in a game with four players times the probability that a player going first finishes first in a game with three players. Of course, it's also possible for the player going third or fourth to finish first while the player going second finishes second. Since the player who went first is still in the game in that case, the second factor changes to F(2,3,1), because the player going second then occupies the second position among the three remaining players.

The total probability that the player going second finishes the game in second place is therefore:

F(1,4,1)*F(1,3,1) + F(3,4,1)*F(2,3,1) + F(4,4,1)*F(2,3,1)

Now we can take a step closer towards a generalization. The probability that a player going in position n gets second place in a game with p players is:

F(1,p,1)*F(n-1,p-1,1) + F(2,p,1)*F(n-1,p-1,1) + ... + F(n-1,p,1)*F(n-1,p-1,1) + F(n+1,p,1)*F(n,p-1,1) + ... + F(p,p,1)*F(n,p-1,1)

Let's actually try this out. I wrote a computer program that performs this calculation for a given n and p. Letting p equal four and varying n over the integers from 1 to 4 (inclusive), we get:

When n = 1: 0.2537844586561435
When n = 2: 0.2513862054039849
When n = 3: 0.24879532739096866
When n = 4: 0.24603400854886412

These values, when added, sum to 1, which is at least a somewhat decent check that they are correct (more precisely, summing to 1 is a necessary but not sufficient condition for their being correct). Interestingly, the first player still has an edge over the other players.
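For reference, here is a minimal sketch of how that calculation might be programmed in Python. It assumes a helper first_place(n, p) that returns F(n, p, 1); actually computing those values from the transition matrix is not shown here.

def second_place(n, p, first_place):
    # Probability that the player in position n (out of p) finishes second,
    # summed over every player j who could take first place.
    total = 0.0
    for j in range(1, p + 1):
        if j == n:
            continue  # player n cannot also finish first
        if j < n:
            # a player ahead of n wins, so n moves up to position n-1 among the p-1 remaining
            total += first_place(j, p) * first_place(n - 1, p - 1)
        else:
            # a player behind n wins, so n keeps position n among the p-1 remaining
            total += first_place(j, p) * first_place(n, p - 1)
    return total

Calling this with p = 4 and n = 1 through 4 should reproduce numbers like the four listed above.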

More to come on this later, but this seems like a good place to stop for now.

Friday, April 26, 2013

Seconds, please?


A few posts ago I showed how we could find the probability of a player in position n in a game with p players finishing in first place. Now we can try and make that more general: what is the probability of a player in position n in a game with p players finishing in mth place?

Before we generalize it that far, however, let's first think about the case of the player getting second place. Let's suppose we want to find the probability that the player who goes second (in position two) in a game with four players finishes in second. What conditions have to be met for this to happen? Clearly, if the player is getting second, someone else is getting first, so one of the things that must happen is that someone besides that player wins the game. Let's consider what happens if the player going first gets first place. If we already know the player going first is going to win, then the chance that the player going second finishes second in a game with four players is the same as the probability that a player going first in a game with three players finishes in first place. Thus, the probability that the player going first gets first place and the player going second gets second place is equal to:

(Probability that player going first wins in a game with 4 players) * (Probability that a player going first wins in a game with three players)

This is getting pretty wordy, so let's invent some notation. Let's call F(n,p,m) the probability that a player in position n in a game with p players finishes in mth place. The above expression can then be condensed to:

F(1,4,1)*F(1,3,1)

(Remember that we already found an algorithm that gives the probability of a player in position n in a game with p players finishing in first.)

         This has been pretty dense so far, so I think I'll stop here and continue on in the next post.

Thursday, April 25, 2013

Chutes and Ladders, the real deal (part 2)

Continuing on with the quest to find real-life applications for Chutes and Ladders, we take another look at the directed graph mentioned in the previous post. As the directed graph for the real game of Chutes and Ladders is essentially no more than a larger version of that one, we should be good to stop there. Now that we have a rigorous basis for comparing other processes to Chutes and Ladders, the next step requires some creativity: we have to think of a real situation that is analogous to Chutes and Ladders based on a comparison of directed graphs. Of course reality is far more complex than Chutes and Ladders is, so finding a perfect analogy is next to impossible, but something close is better than nothing. Back in one of the first posts bringing up applications of graph theory, the process of a forest fire spreading was discussed. Let's see what we can do with that.
   
Just as Chutes and Ladders progresses randomly based on dice throws, a forest fire progresses randomly with time. Thus, while a dice throw represented one "iteration" in the Chutes and Ladders Markov Chain, some unit of time, say, a minute, represents an iteration in the forest fire Markov Chain. How might we define our states in this Markov Chain? With Chutes and Ladders, we defined the states based on what square a player was positioned on -- a player on the thirty-fifth square was said to be in the thirty-fifth state. However, since a fire can be on multiple trees at once (while a Chutes and Ladders player can only be on one square at a time), it is more sensible to define the states based on the number of trees that have been set ablaze -- if thirty trees are on fire, the Markov Chain will be in state thirty. Moreover, the last state, just like in Chutes and Ladders, is an absorbing state. We do, however, have to assume that the fire travels in a fixed path and that the number of trees that can burn in one time interval falls within a fixed range (just like how you always spin between a 1 and a 6 in Chutes and Ladders), but so far this seems to have some potential. Now for the deal breaker: what is the analogy to chutes and ladders? Chutes are a bit difficult, since forest fires don't normally just fizzle out when a certain number of trees are burned. However, a "ladder" might be possible if a group of trees is clustered so tightly together that when one burns, the others immediately burn too. The Markov Chain is instantly carried from one state to a higher state, just as happens when a player lands on a ladder in Chutes and Ladders.
   
The beginning of the directed graph, assuming that the maximum number of trees that can be burned in one time interval is three and the minimum is one, might look something like this...

[image: the beginning of the directed graph for the toy forest-fire chain]

...If the "forest" looked something like this:

[image: the toy forest layout]
(Note that under this model, "red lines" are impossible. Yellow lines would appear if we decreased the minimum number of trees that can burn in one time interval to zero.)
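To make the comparison a little more concrete, here is a minimal sketch of what the transition matrix for this toy model might look like in Python. Everything in it -- the ten-tree forest, the equal 1/3 probabilities, and the particular cluster that acts as a "ladder" -- is an assumption made up purely for illustration.

import numpy as np

TREES = 10        # size of the toy forest (made up for illustration)
CLUSTER_AT = 4    # reaching 4 burned trees ignites a tight cluster...
CLUSTER_TO = 7    # ...instantly carrying the chain to 7 burned trees (the "ladder")

def apply_ladder(state):
    # Mirrors how landing at the base of a ladder redirects a Chutes and Ladders player.
    return CLUSTER_TO if state == CLUSTER_AT else state

P = np.zeros((TREES + 1, TREES + 1))
for burned in range(TREES + 1):
    if burned == TREES:
        P[burned, burned] = 1.0   # every tree burned: the absorbing state
        continue
    for extra in (1, 2, 3):       # one, two, or three more trees burn each time step
        landing = apply_ladder(min(burned + extra, TREES))
        P[burned, landing] += 1 / 3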

    Although this forest fire example still seems a bit contrived, I think the similarities it does share with Chutes and Ladders are quite remarkable. With only a very small amount of tweaking, we can get a lot of information about this process with the tools already developed for analyzing Chutes and Ladders.

Friday, April 19, 2013

Chutes and Ladders, the real deal

Now that the project is nearing an end, I thought it would be a good time to branch out from the Chutes and Ladders world and start looking in-depth at possible applications for the results I've obtained. Markov Chains are used all over the place to model real-world processes, but finding situations which are analogous to moving along a Chutes and Ladders board is a bit trickier, especially because chutes and ladders make the movement a lot more irregular.

One useful first step is to remove all of the "fluff" in the game and break it down to its simplest mathematical form. For processes that are Markov Chains, this can be done by representing them as "directed graphs". A graph, in this context, refers to a set of points (vertices) connected by edges. You can draw two points on a piece of paper, connect them with a line, and bam, you have a graph. Straightforward, right? Perhaps you can already see why graphs are so useful for simplifying complicated processes. In a directed graph, each of these points corresponds to a state in a Markov Chain, and an edge connecting two points signifies that there is a non-zero probability of going between the corresponding states. However, since the probability of going to state B from state A isn't necessarily the same as the probability of going to state A from state B, it is also necessary to specify, for each edge, which state is the "start" state and which is the "end" state; this is conventionally done by placing an arrow on the edge that points in the direction of the end state. Thus, there may be two edges between two vertices A and B: one that starts on A and ends on B and one that starts on B and ends on A.
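As a purely illustrative sketch, the directed graph can be read straight off a transition matrix: every nonzero entry becomes one arrow. The three-state matrix below is made up for this purpose; it is not the simplified game discussed next.

import numpy as np

# A made-up three-state Markov Chain, just to show the correspondence.
P = np.array([
    [0.5, 0.5, 0.0],
    [0.0, 0.5, 0.5],
    [0.0, 0.0, 1.0],   # absorbing state: its only edge points back to itself
])

# Each nonzero entry P[a][b] becomes a directed edge that starts on state a and ends on state b.
for a in range(len(P)):
    for b in range(len(P)):
        if P[a][b] > 0:
            print(f"state {a} -> state {b} (probability {P[a][b]})")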

    Here's an example of a directed graph for the "simplified" game of Chutes and Ladders introduced a few months ago:

    The simplified game (the arrow represents a chute)

     

    A rather crude directed graph. Putting in arrows made the image too cluttered, so here a green line signifies that it starts on the square furthest to the left between the two, a yellow line signifies that it starts and ends on the same square, and a red line signifies that it starts on the square furthest to the right between the two.

    Note that here the square on which the top of the chute is located is not represented by a vertex in the directed graph. It is what we might call a "pseudo-state" rather than a real state, since it is never possible for a player to be located on that state at the end of his or her turn. This square instead influences the graph by increasing the probability that a player lands on the square on which the end of the chute is located.
   
    More juicy details to come in the next post. Stay tuned.

Thursday, April 11, 2013

More on order


In the last post, I listed the chance that the player going first wins in a game with 2 players. What happens, however, when the number of players in the game blows up, say to a million? The probability comes out to be close to 0.0015, which is just about equal to the probability of one player finishing the game in exactly 7 moves. This actually makes sense since if the player going first in a game with a very large number of players doesn't win in the shortest number of moves possible, chances are that someone else will. Thus, the chance that the first player wins is very close to the probability that he or she finishes in the shortest number of moves possible.

Let's switch gears for a moment and now consider the unlucky fellow who has to go last in the game. How does this change the math? This player, in order to win, cannot tie with anyone else, so all other players must finish in more than y moves. The probability of the player going last winning in exactly y moves thus translates mathematically to:

Py * (1 - (P7 + P8 + P9 + ... + P(y-1) + Py))^(p-1)

Note that the only difference between this and the case with the first player is that this time, there is an additional term, Py, enclosed in the parenthetical expression.

Testing this out for the case when there are two players, we get that the probability of the second player winning is equal to 49.2%, which is expected based on the probability we found of the first player winning (50.8%). As the number of players gets larger and larger, the probability of the last player winning gets closer and closer to 0. When there are a million players, the probability of the last player winning is smaller than 1.0 * 10^-500. It is more likely that someone wins the Mega Millions jackpot fifty times in a row than that this happens.

Now that we have these two extreme cases down (first and last), let's try to be a bit more general: what is the probability of a player in order k in a game with p players winning? We know that there are (p - k) players whom this player trumps and (k - 1) players who trump this player. Thus, this player can finish in the same number of moves as those (p - k) players, but he or she must finish in fewer moves than the (k - 1) players ahead. Thus, the probability of the player with order k winning is equal to:

Py * (1 - (P7 + P8 + P9 + ... + P(y-1) + Py))^(k-1) * (1 - (P7 + P8 + P9 + ... + P(y-1)))^(p-k)

Note that when k = 1 (player goes first) or k = p (player goes last), this turns into one of the two expressions discussed before.
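Here is a minimal Python sketch of that calculation. It assumes a list p_moves in which p_moves[y] holds Py, the probability of a single player finishing in exactly y moves; building that list from the transition matrix is not shown.

def win_probability(k, p, p_moves):
    # Probability that the player with order k (out of p) wins,
    # summed over every possible finishing move count y for that player.
    total = 0.0
    shorter = 0.0  # running value of P7 + P8 + ... + P(y-1)
    for y in range(7, len(p_moves)):
        beats_earlier = (1 - (shorter + p_moves[y])) ** (k - 1)  # the k-1 earlier players need more than y moves
        beats_later = (1 - shorter) ** (p - k)                   # the p-k later players need y or more moves
        total += p_moves[y] * beats_earlier * beats_later
        shorter += p_moves[y]
    return total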

In a game with four players, the probabilities of winning, in order from the player going first to the player going last, are as follows:

0.26007883
0.25322465
0.24657453
        0.24012197

        These add up almost exactly to 1, so our math is probably good.

Wednesday, April 10, 2013

Follow the leader...


        "First is the worst, second is the best,...". Order is clearly important to children. I'm sure we can all remember that tingly feeling of smug satisfaction we got when we were the line leaders for the day. However, does order really matter in a game of Chutes and Ladders? What statistical advantage does the first player have over the last?

In trying to break this down mathematically, let's first try to imagine a game of Chutes and Ladders where the game doesn't stop as soon as one player reaches the finish but continues until all players arrive there, and each player takes note of how many moves it took him or her to reach the end. Let's also imagine that, for now, we have three players playing the game: Alec, Betty, and Carl.

Suppose Alec finishes in 34 moves, Betty finishes in 42 moves, and Carl finishes in 57 moves. If this had been a "normal" game of Chutes and Ladders, order wouldn't have mattered, as Alec would have been the winner no matter what order the players made their moves in. Now suppose instead that Alec finishes in 45 moves, Betty finishes in 41 moves, and Carl finishes in 41 moves. This time, order clearly matters: if Carl had started before Betty, he would be the winner, but if Betty went before Carl, she would be the winner. From this we can see that order is really a way of "resolving ties": the first position trumps all other positions, the second position trumps all positions but the first, and this continues on and on until the last position, which is trumped by all other positions.

So how can we define this probability mathematically? Let's start with considering the probability that the player going first wins in a game with p players. Well, the best way to begin is to "partition" this probabilistic event (that the player going first wins) into a bunch of smaller events. The most natural way to do this is to break it up as follows: add the probability that the first player finishes in 7 moves (since it is impossible to finish in any fewer) to the probability that the first player finishes in 8 moves, to the probability that the first player finishes in 9 moves, etc. Since there is no upper limit on how many moves a game of Chutes and Ladders can take, the way there is a lower limit (7), we would have to go all the way to infinity to get a full partition, but since this is a practical impossibility, we will instead settle for a close approximation (as we have done before).

Understanding this partition business might be best done with the analogy of a cake. If the entire cake represents the probability that the first player wins the game, then a slice might represent the probability that the first player wins in, say, 20 moves. The idea behind the "approximation" method can be similarly understood if we imagine that the cake is cut and served to a large number of guests so that each guest receives exactly one-half of the remaining cake. The first guest receives one-half of the total cake, the second guest receives one-fourth of the total cake, the third guest receives one-eighth, and so on. If we put the pieces given to the first twenty guests together, we would nearly, but not exactly, have the whole thing (bear in mind, though, that the probabilities we're dealing with aren't distributed in this fashion).

Thus, we can represent the probability as follows:

Approximate probability that the first player wins: P(1,7) + P(1,8) + P(1,9) + ... + P(1,n), where P(x,y) is the probability that the player in position x wins by finishing in exactly y moves and where n is some large number (in this case I'm using 266).

Let's think about what has to happen for P(1,y), 7<=y<=n, to be satisfied. Clearly, one of the conditions is that the first player has to finish in y moves. Remember those variables we defined in the last two posts? Those are probably going to be useful again. The probability of this condition is Py, and, if y is between 7 and 266 (which we assume must be true), we can find Py from the information we obtained using the transition matrix of the Chutes and Ladders Markov Chain. Good. What about the other conditions? Clearly the other players can't finish in fewer than y moves if the first player is to win, but what happens if they finish in the same number of moves? As we already discussed, the first player's position trumps all the other positions, so even if one of the other players finishes in y moves, the first player still wins. Thus, we can represent the probability that all of the other (p-1) players finish in y or more moves as follows:

(1 - (P7 + P8 + P9 + ... + P(y-1)))^(p-1)

(A very similar expression was used in the previous post, so you can look there for a proper explanation of why this works, but the essence is that we take the complement of the probability that one player finishes in (y-1) moves or fewer and raise that to the (p-1) power, since this must happen (p-1) times independently.)

Since these two conditions are the two conditions that must be met for the first player to win in y moves, this probability is given by their product:

Py * (1 - (P7 + P8 + P9 + ... + P(y-1)))^(p-1)

Summing hundreds of "P(1,y)"s together is, of course, most suited for a computer. The result, when we let the number of players be equal to 2, is approximately equal to 50.8%, giving this player a total edge of about 1.6% over the second player. This is not a trivial advantage (it is higher than the typical edge that a card counter has over a dealer in Blackjack), but it is probably not worth fighting over either.
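For reference, here is a minimal Python sketch of that summation, assuming a list p_moves in which p_moves[y] holds Py (computing those values from the transition matrix is not shown here):

def first_player_win_probability(p, p_moves):
    # Approximate probability that the player going first wins a p-player game.
    total = 0.0
    shorter = 0.0  # running value of P7 + P8 + ... + P(y-1)
    for y in range(7, len(p_moves)):
        # The first player finishes in exactly y moves AND every other player needs y or more.
        total += p_moves[y] * (1 - shorter) ** (p - 1)
        shorter += p_moves[y]
    return total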

Wednesday, April 3, 2013

Much more multiplayer madness


So, in the last post we found the probability of at least one player out of p players finishing the game in exactly n moves, which we called Pa~n, to be 1 - (1 - Pn)^p, where Pn is the probability of one player finishing the game in n moves. However, to determine the probability of a game with p players finishing in n moves, we must still find the probability that nobody finishes the game in fewer than n moves.

We know that the probability that one player finishes in fewer than n moves is given by: P1 + P2 + ... + P(n-1). The probability that one player does not finish in fewer than n moves is therefore (1 - (P1 + P2 + ... + P(n-1))). Let us suppose that only one player wins (under the definition of "number of moves" I'm using, it is possible for more than one player to "win"), meaning that one and only one player finishes the game in exactly n moves. This makes it impossible for that one player to win in fewer than n moves, but it's still fair game for the remaining p-1 players. That means the probability that nobody finishes the game in fewer than n moves is: (1 - (P1 + P2 + ... + P(n-1)))^(p-1)

Pa~n * Pb~n, or the probability that a game with p players finishes in exactly n moves (here Pb~n denotes the probability that nobody finishes in fewer than n moves), is therefore given by the expression:

(1 - (1 - Pn)^p) * (1 - (P1 + P2 + ... + P(n-1)))^(p-1)

Knowing this, we can estimate the expected number of moves as follows:

Expected number of moves ~= (1)*(Pa~1 * Pb~1) + (2)*(Pa~2 * Pb~2) + ... + (n)*(Pa~n * Pb~n)

Where n is some large number.

(See the post "Great Expectations" for an explanation of why this works).
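Here is a minimal Python sketch of that estimate, assuming a list p_moves in which p_moves[n] holds Pn, the probability of one player finishing in exactly n moves. It carries over the same simplifying assumption that one and only one player "wins".

def expected_moves(p, p_moves):
    # Estimated expected number of moves for the winner of a p-player game.
    expectation = 0.0
    shorter = 0.0  # running value of P1 + P2 + ... + P(n-1)
    for n in range(1, len(p_moves)):
        at_least_one = 1 - (1 - p_moves[n]) ** p     # Pa~n: someone finishes in exactly n moves
        none_sooner = (1 - shorter) ** (p - 1)       # Pb~n: nobody finished in fewer than n moves
        expectation += n * at_least_one * none_sooner
        shorter += p_moves[n]
    return expectation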

I already found Pn with a program for n up to about 250 (which is sufficiently large for our purposes), so here's the estimated expected number of moves for the first few values of p:


p = 1 --> 39.2
p = 2 --> 26.5
p = 3 --> 21.9
p = 4 --> 19.5


Just to make sure I didn't make a mistake (not a rare occurrence), I decided to instead modify the simulation program I wrote to allow for more than one player and calculated the expected number of moves based on the average of a large number of games (a sketch of that multiplayer simulation appears a bit further down). Here are the first few results from that:

p = 1 --> 39.2
p = 2 -- > 26.3
p = 3 --> 21.7
p = 4 --> 19.3

They're close, but there's still clearly a slight error. I'll look into this more, but I suspect it has something to do with the assumption that only one player wins.
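For completeness, the Monte Carlo check might look something like the sketch below. The helper moves_to_finish() is hypothetical -- a stand-in for the single-player simulation written earlier (not shown here) that plays one complete game and returns the number of moves it took.

def simulated_expected_moves(p, moves_to_finish, games=100_000):
    # Average, over many simulated games, of the winning (i.e. minimum)
    # move count among p independent players.
    total = 0
    for _ in range(games):
        total += min(moves_to_finish() for _ in range(p))
    return total / games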

Here's an Excel graph showing the results (number of players on the horizontal axis, expected number of moves on the vertical axis):


As more and more players are added, the expected number of moves of the winner gets closer and closer to 7, since 7 is the smallest possible number of moves.

Now that this approach is done, is the Markov strategy worth considering again? I think the only way that could be done would be to build a transition matrix around the moves of the winning player (for example, the winning player is probably more likely to spin larger numbers and land on big ladders), but this seems to be more trouble than it's really worth.