Machine Learning and C-Bet Frequencies
Posted by james in Low Stakes
There's a great post that Reddit user Fossana created yesterday in which they tried to figure out when a player in a BTN vs BB SRP situation can c-bet their entire range for a third of the pot, and which factors should most influence that decision. To reach those conclusions, they used Random Forest and Decision Tree algorithms. Here's a quote from the post, but you should really check it out if this stuff interests you:
After compiling all of this data in Excel, I used machine learning libraries in Python to see if machine learning could predict which flops you can cbet 1/3 100% without a large loss in EV (loss in EV < 1% pot). The basic idea behind machine learning is that you give an algorithm some training data (some subset of the 72 flops), and then it will come up with an equation that's as accurate as possible with its predictions with respect to the training data. Then you test the validity of the equation the algorithm came up with by seeing how well it does on unseen data (the flops that weren't used in training the algorithm). For this scenario, I used decision trees and random forests as the chosen algorithms. I won't go too much into what these algorithms do, but I think both of these algorithms are good for analyzing this situation because decision trees can give you heuristics to follow (e.g. cbet range on A8- flops), while random forests can give you a good idea of what predictor variables are most important.
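For readers who want to see what that workflow looks like in practice, here is a minimal sketch using Python's scikit-learn library, roughly following the quote above. The file name, the column names, and the train/test split are my assumptions, not Fossana's actual setup:

```python
# Minimal sketch of the workflow described above (assumptions: the flop data
# compiled in Excel has been exported to "flops.csv", and the label column
# "can_cbet_range" marks flops where betting 1/3 pot with the whole range
# loses less than 1% of the pot).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

flops = pd.read_csv("flops.csv")
X = flops.drop(columns=["can_cbet_range"])
y = flops["can_cbet_range"]

# Hold out part of the 72 flops as unseen data to test the fitted models on.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# A shallow decision tree yields human-readable heuristics...
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# ...while a random forest gives a (usually more accurate) ensemble prediction.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("decision tree accuracy:", accuracy_score(y_test, tree.predict(X_test)))
print("random forest accuracy:", accuracy_score(y_test, forest.predict(X_test)))
```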
The entire post can be found here.
Please leave your thoughts, should you have any, on their methodology or conclusions.
6 Comments
I've been playing with my recently purchased GTO+, and I've noticed it's too easy to go down a rabbit hole. I'm wondering how practical all of these deep analyses are, especially considering we're playing against humans who can and will do most things differently from the GTO perspective.
The way I'm approaching GTO study is to try to come up with conclusions that can be extrapolated to as many situations as possible. What do you think about this approach?
Edit: An example of this oversimplification/generalization is to always cbet tpgk+ and any good draws.
Machine learning is capable of finding the heuristics you're looking for, like always cbetting tpgk+ and any good draws. In the final model, I found that you can cbet 1/3 100% BTNvsBB on these boards without major EV loss:
A9+ means a board with 1 Ace and at least one card 9-K.
A8- means a board with 1 Ace and two cards 8 or lower.
2BW+ means a flop with at least two cards 9 or higher.
1BW means flop with 1 card 9 or higher.
Rags is everything else.
But because this is the simplest model, it only correctly predicts whether you can cbet range on a given flop 80% of the time.
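If you want to play with these buckets yourself, here is a rough sketch of how the board categories above could be coded up. The input format (three rank characters, suits ignored) and the ordering of the checks are my own assumptions, not Fossana's code:

```python
# Rough sketch: bucket a flop into the categories defined above.
# Assumption: flops are given as three rank characters, e.g. "K72" or "AT4";
# suits are ignored because these buckets only look at ranks.
RANKS = {rank: value for value, rank in enumerate("23456789TJQKA", start=2)}

def flop_category(flop: str) -> str:
    values = [RANKS[card] for card in flop]
    if values.count(14) == 1:  # exactly one ace on board
        others = [v for v in values if v != 14]
        # A9+ if at least one other card is 9 through K, otherwise A8-.
        return "A9+" if any(v >= 9 for v in others) else "A8-"
    high_cards = sum(v >= 9 for v in values)  # cards 9 or higher
    if high_cards >= 2:
        return "2BW+"
    if high_cards == 1:
        return "1BW"
    return "Rags"

# Examples: flop_category("A84") -> "A8-", flop_category("QJ6") -> "2BW+",
# flop_category("762") -> "Rags"
```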
In the second model I went further than the simple model and identified the variables that were most important in determining whether or not you could cbet 1/3 100% without a large loss in EV. These are the results (a short code sketch for producing this kind of ranking follows the variable definitions below):
Predictor variable | Importance
IP Equity | 0.248376
Connectedness OOP | 0.191935
Nut advantage IP | 0.161676
Top pair advantage IP | 0.116107
Unpaired % OOP | 0.088248
Pair % OOP | 0.076904
m | 0.022818
2BW+ | 0.018598
Rags | 0.018452
r | 0.017319
tt | 0.014563
1BW | 0.013978
Paired | 0.008381
High card paired | 0.002644
Connectedness OOP is how often OOP has a straight draw (open-enders were given double weight).
Nut advantage IP is the absolute difference in % between how often IP has an overpair or better versus OOP.
Top pair advantage IP is the same as nut advantage IP, but only applies to top pair.
Unpaired % OOP is how often OOP has an unpaired hand without a draw.
Pair % OOP is how often OOP has a pair.
m, r, and tt correspond to monotone, rainbow, and two-tone flops.
Paired refers to whether the flop is paired or not.
High card paired refers to whether the pair on a paired flop is above or below the other card.
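For completeness, here is a short sketch of how an importance ranking like the table above can be produced with scikit-learn. It reuses the hypothetical flops.csv and label column from the earlier snippet, and feature_importances_ is the forest's standard impurity-based measure; Fossana's exact method may differ:

```python
# Sketch: rank predictor variables by random-forest feature importance.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

flops = pd.read_csv("flops.csv")               # same hypothetical data as before
X = flops.drop(columns=["can_cbet_range"])
y = flops["can_cbet_range"]

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# feature_importances_ holds one importance score per predictor column.
importances = pd.Series(forest.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False))
```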
This sort of information is more useful because it can be used for extrapolation. These are variables you can think about when deciding whether you want to cbet range or not in other situations such as COvsBB, or against a nitty opponent. It can also help you make up for the 20% inaccuracy of the first model by being able to analyze the specifics of the flop in terms of these variables. It also gives you insight into why the heuristics work at all.
More importantly, this approach with machine learning can be used for many things: predicting preferred bet size, identifying the most important variables when it comes to bet sizing, predicting spots where you can cbet 1/3 100% in 3bet pots, predicting spots where you should check 100% in single-raised pots OOP, etc.
In general, it's important to find principles and patterns that you can use to extrapolate and generalize, but at the same time, you want a deep understanding of them, or else you won't be able to adjust correctly. If there were no patterns in poker, piosolver wouldn't be a very useful tool.
Can this be used for heads-up play? I see that the ranges used look slightly tighter than heads-up ranges. Do you think that hinders me from doing this profitably heads-up?
As far as I know, cbetting 100% in heads-up is not as good as it is in 6max, because equities tend to run closer. Heads-up ranges are supposed to be much wider than 6max ranges (in 6max you open something like 40-50% of buttons; in heads-up you open 80%+).
The BTN's edge in heads-up is smaller than the BTN's edge in 6max because the heads-up button opens way wider (there is only one player behind instead of two).
That makes sense. Thanks:)
Really interesting analysis. However, Fossana, in your multiple-bet-sizing Pio setup you limited OOP's raise sizing to a single 62% choice. Moreover, you never allowed OOP to donk turns. I'm afraid that if you did not change this for the pure 1/3-pot runs, your results are not very reliable in a game-theory-optimal world. Both of these factors will greatly improve the EV of a pure 1/3-pot strategy, since IP is now allowed to see cheap turns and rivers.
That being said, in the current metagame, turn leads and large flop raise sizings (and raise frequency in general) against small bets are heavily underused, so your results are probably still pretty useful. Additionally, the general aggression principles you outline at the bottom of your post as a result of the analysis are certainly applicable, even if not specifically to pure bet strategies.
Cool work, Cheers.