Confusion on Risk of Ruin from MoP and relating it to absorbing states of Markov Chains
Posted by mitchr1598
posted in Gen. Poker
I'm part way through reading Mathematics of Poker and I'm currently on the first chapter of Part IV (on risk of ruin). It states that for games with positive expected value the risk of ruin is < 1.
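For reference (this is the standard diffusion-approximation result, not a quote from the book): model the bankroll as a Brownian motion with win rate μ > 0 and variance σ² per hand, and the risk of ruin for a bankroll b comes out as

```latex
% Risk of ruin under a Brownian-motion approximation of the bankroll,
% with win rate \mu > 0 per hand and variance \sigma^2 per hand:
r(b) = e^{-2 \mu b / \sigma^2}
% For \mu > 0 this is strictly between 0 and 1, consistent with the
% claim that risk of ruin is < 1 for positive-EV games.
```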
Now this confused me when I thought about a theorem on absorbing states of Markov chains I learnt last semester at university. I couldn't find my notes, but I found the theorem in another university's notes. It's on page 9 of these notes: https://math.dartmouth.edu/archive/m20x06/public_html/Lecture14.pdf. It states that for any Markov chain where the probability of being absorbed is > 0 from every state, the probability that the process will be absorbed is 1.
These two statements obviously contradict each other, so I assume there's something obvious that I'm missing, but I fail to see the difference between the problem of bankroll management and Markov chains. I would have thought that bankroll management is just a Markov chain with an absorbing state at 0.
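To make the setup concrete, here's a quick sketch (a toy model of my own, not from the book): treat the bankroll as a ±1 unit-bet random walk with win probability p > 1/2 and an absorbing state at 0, estimate the ruin probability by simulation, and compare it against the closed-form (q/p)^b for the infinite-horizon walk.

```python
import random

def ruin_probability(b, p, trials=2000, max_steps=2000):
    """Estimate the chance that a +/-1 random walk starting at bankroll b,
    winning each step with probability p, ever hits 0.
    Truncated at max_steps, so this slightly underestimates true ruin."""
    ruined = 0
    for _ in range(trials):
        x = b
        for _ in range(max_steps):
            x += 1 if random.random() < p else -1
            if x == 0:
                ruined += 1
                break
    return ruined / trials

p, b = 0.55, 10                  # illustrative values, not from MoP
est = ruin_probability(b, p)
exact = ((1 - p) / p) ** b      # infinite-horizon ruin chance, (q/p)^b
print(est, exact)               # estimate should land near 0.134, well below 1
```

Even though the walk runs "forever", the estimate settles well below 1 whenever p > 1/2, which is exactly the MoP claim.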
Thanks to anyone who can help clarify what I'm not understanding.
I don't know if I can explain the perceived contradiction immediately, but I'm intrigued that someone (finally) brings up Markov chains in relation to poker. This is something I'd be interested in exploring further, though I've had to do it on my own so far.
Incidentally, Ankenman (co-author of MoP) has made a series of videos on another site that expands a bit on risk of ruin and the concept of bankroll half-life. Have you seen that series by any chance?
Ideally, he's the one you should ask. I don't know that he posts on here, but he does post on 2+2 (or at least used to).
Actually, thinking more about it, it must of course be the case that any finite bankroll will disappear with certainty under the assumption of infinitely many hands played. And that assumption is built into the theorem for absorbing Markov chains, since it takes n to infinity. But n is not infinite in practice. We deal with finite numbers of hands and finite bankrolls, and if I remember correctly (without checking), this was the observation they made in MoP as well. So they ended up with the alternative approach of bankroll half-life, which presumably models reality better, even though both models would be correct in some theoretical sense.
Is it possible that the statement in MoP was made assuming infinite bankrolls and infinite hands? In that context it would seem correct.
Does that make sense to you?
Yeah the statement in MoP was made assuming infinite bankrolls and infinite hands.
Does the theorem on Markov chains only apply to finite state spaces, or does it apply to infinite state spaces as well? Everywhere I looked online, nothing said it applies only to finite state spaces, but most of the discussion was about finite state spaces.
And no, I haven't seen his other video series. I'll check it out
It seems like it might apply, but I'm not really an authority on that. I've only really worked with discrete and finite time and state-space Markov models myself.
For practical purposes though, wouldn't you be more interested in things like time to absorption, or time variance? That seems to be what MoP is trying to model.
I think the difference is that the Markov chain for a poker bankroll has (hopefully) a probability of going up that is higher than the probability of going down. It drifts away from the absorbing state and can go to infinity. If you always withdraw profits that take your bankroll above some value B, though, you will go bust with p = 1, since the chain is now finite.
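The standard gambler's ruin algebra backs this up. With absorbing barriers at 0 and N, starting bankroll b, win probability p ≠ 1/2, and q = 1 − p:

```latex
P(\text{ruin before reaching } N \mid X_0 = b)
  = \frac{(q/p)^b - (q/p)^N}{1 - (q/p)^N}
% For p > 1/2 we have q/p < 1, so letting N \to \infty gives
P(\text{ever ruined} \mid X_0 = b) = (q/p)^b < 1
% With the cap instead (profits above B withdrawn), the state space is the
% finite set \{0, \dots, B\}; from any state, B consecutive losses reach 0
% with probability q^B > 0, so the absorbing-chain theorem applies and
% ruin has probability 1.
```

So the two statements never actually meet: the absorbing-chain theorem needs a finite chain, while the positive-EV bankroll walk lives on an infinite one.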
Yeah, I've taken more advanced courses now, and this theorem only applies to finite Markov chains. When the chain is infinite and the probability of winning is greater than 1/2, the probability of going bust is less than 1, consistent with MoP.
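To see the finite-chain case numerically, here's a sketch (the cap model is hypothetical): build the transition matrix on {0, ..., B} where state 0 is absorbing and wins at B are withdrawn (a win at B keeps you at B), then watch the probability of having been ruined climb toward 1 as the horizon grows.

```python
import numpy as np

B, p = 10, 0.55  # cap and per-step win probability (illustrative values)
P = np.zeros((B + 1, B + 1))
P[0, 0] = 1.0                   # state 0 (ruin) is absorbing
for i in range(1, B + 1):
    P[i, min(i + 1, B)] += p    # win; profits above B are withdrawn
    P[i, i - 1] += 1 - p        # lose
# Distribution after many steps, starting from a full bankroll of B:
dist = np.linalg.matrix_power(P, 100_000)[B]
print(dist[0])  # probability of having been ruined; tends to 1 as steps grow
```

Despite the upward drift, the finite chain gets absorbed with probability 1, exactly as the theorem says; remove the cap and the same drift gives ruin probability (q/p)^b < 1.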