"GTO PREFLOP RANGE"
Posted by SikBluffBruh
Posted in Low Stakes
I have a question about the GTO preflop range that everyone is talking about these days (basically PokerSnowie's range).
Let's take that 6-max preflop range. How is it considered GTO? Is it GTO only when it's applied against completely identical ranges, or is it GTO against ANY range?
It's an extremely tight range.
GTO means being unexploitable, right? Well, if everyone at our table is playing THAT tight a range, why can't we just loosen up to the point where the table isn't defending against our raises and reraises enough, preflop or postflop? Or am I being naive, and the numbers won't allow ME to loosen up that far against a table of 5 others (since there are so many opponents), even though their ranges are that TIGHT?
Or am I correct, and players who loosen up just enough, to the correct point, could exploit them? Is the PokerSnowie range just a suggested GTO range because that's roughly what the mass population at 6-max plays today?
((Though it seems hard to believe that the overwhelming majority of the reg population is playing that range all the way from low stakes to high stakes.))
What am I missing?
Thank you all. I'm really trying to learn about solvers lately and will be purchasing one very soon.
PokerSnowie preflop ranges are not GTO ranges, because Snowie is not a solver.
A perfect Nash equilibrium strategy would be one where nobody can do anything to increase their EV. If you could play a looser range preflop and make more money, then you are not playing against perfect strategies.
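To make that definition concrete, here is a minimal sketch in Python using rock-paper-scissors as the toy game (not poker): at the equilibrium, which is uniform mixing, no unilateral deviation to any pure strategy gains EV.

```python
# Toy illustration of the Nash definition using rock-paper-scissors.
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(a, b):
    """+1 if a beats b, -1 if b beats a, 0 on a tie."""
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

def ev(strat_a, strat_b):
    """Expected payoff to A; strategies are dicts of move -> probability."""
    return sum(strat_a[m] * strat_b[n] * payoff(m, n)
               for m in MOVES for n in MOVES)

uniform = {m: 1 / 3 for m in MOVES}
for m in MOVES:
    pure = {n: (1.0 if n == m else 0.0) for n in MOVES}
    # Every unilateral deviation earns exactly 0 against the equilibrium
    # strategy -- nothing does better, which is the definition above.
    assert abs(ev(pure, uniform)) < 1e-12
```

In a poker context the same statement applies, just over astronomically larger strategy spaces: at equilibrium, no looser (or tighter) preflop range can gain EV.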
But I thought Snowie's 6-max preflop range was considered very close to preflop GTO (I should have stated that), and that's why it's suggested for use.
If not, why is it the suggested preflop range to use? Especially if regulars already know the mass-population regs use it, and it has exploitable holes because they're playing SO tight.
Also, what is the difference between Nash equilibrium and GTO/solved?
We don't quite know what the perfect preflop solution would look like. That said, Snowie preflop ranges are considered to be pretty good, and in fact I often recommend them as a starting point for players asking for preflop advice.
You say that the snowie ranges are tight. I'm not sure if they are? Compared to what?
As far as I understand it, GTO (game theory optimal) basically means Nash equilibrium, which is a perfect solution to a zero-sum game. The difference between the two might be that a GTO solution is whatever the best strategy is against a given strategy, whereas a Nash equilibrium is the end point of two or more players trying to maximize their EV.
" You say that the snowie ranges are tight. I'm not sure if they are? Compared to what? "
Compared to the past. I think these are the tightest starting ranges I've ever seen suggested, hands down.
" That said, Snowie preflop ranges are considered to be pretty good, and in fact I often recommend them as a starting point for players asking for preflop advice. "
If Snowie's range has nothing to do with even being close to GTO, then what makes it so commonly suggested?
And what would stop a player from playing a much looser range, inputting it into a GTO tree against a table of players whose ranges are nearly identical to Snowie's, and just wrecking them, since Snowie's suggestions on how to counter are going to be too tight?
That is probably because, for the longest time, the vast majority of players were super nitty with their big blind defence and 3-betting, so the best strategy was to open really wide.
You make it sound like I'm suggesting that Snowie solutions are very far from actual GTO. I never said that. What I said is that Snowie is not a solver, therefore the solution is not exactly GTO. That said, I think they're pointing in the right direction.
I'm still not buying the idea that Snowie's ranges are so exploitably tight that we could just go crazy against them. Maybe you could provide an example of a Snowie strategy that you think is highly exploitable preflop?
" I'm still not buying the idea that Snowie's ranges are so exploitably tight that we could just go crazy against them. Maybe you could provide an example of a Snowie strategy that you think is highly exploitable preflop? "
Oh, I don't know, take the example belrio42 mentioned below: we make our range very loose, to the point where all 5 other players using a Snowie-like range aren't comfortable or experienced playing against us and fail to adjust, all while we've already solved/studied how to play proper GTO against their (Snowie) ranges.
Would that not exploit them heavily?
I can't really prove or disprove this statement. I don't have a preflop solver, and even if I did, they aren't quite perfect as far as I know. We already established that Snowie is not a solver, so the outputs aren't perfect, and therefore they can be exploited to some degree. It just sounds like you're suggesting that the Snowie ranges are VERY exploitable, which I'm not sure is the case.
But yeah, we would exploit them, although probably not "heavily", by studying a perfect GTO strategy against the Snowie strategy.
Can you explain what Snowie is, if it is not a solver? Thanks.
"GTO poker is the scenario where both players are playing perfectly, and neither one can improve his strategy any further."
No Limit Hold'em is not GTO-solved, not even for heads-up, although we know that for heads-up a GTO strategy exists. The best we can do now is use a solver and approximate a poker strategy that performs close to optimal play under the conditions we choose in the solver, such as (preflop ranges and) the bet sizings allowed, etc.
1) Preflop ranges
The ranges differ by position, but everyone is supposed to play optimally, that is, the same ranges from the same position.
2) Exploitability
If everyone plays the perfect GTO strategy, then by definition there is no strategy that can exploit them.
3) What we miss
There is no known perfect GTO strategy, and most solvers only consider heads-up play. So in fact we don't know what the GTO preflop ranges are. And even if we knew, human players are generally not capable of playing perfectly, e.g. 3-betting exactly 65.734% of the time with a certain holding from a certain position against a certain position against a certain raise size, etc. We can only approximate optimal play, and this means the human biases are still there.
Having said that, the problem is that more and more players are learning about GTO, meaning the average player is playing better and better, and since there is rake, poker will become more difficult to beat.
I don't know what you mean exactly by "cooperate", but if we know their squeeze range (along with the other players' ranges in the pot, and that they're playing GTO) and that they are playing GTO against our range, then we should still be able to find a mathematical GTO solution against them at 6-max.
"Once you’re in three-handed (or more) games, there is no game theory optimal solution, strictly speaking. This is because there is no stable equilibrium (or too many equilibria to count, depending on whom you ask). The players can always adjust to each other, or take advantage of a player trying to execute a GTO strategy and not adjusting to them, through a process that Bill Chen and Jerrod Ankenman call “implicit collusion” in their 2006 book The Mathematics of Poker. Thus there is no unexploitable strategy."
source PokerNews
I don't know if this still holds, but I don't see any evidence that the above isn't still the situation.
" No Limit Hold'em is not GTO-solved, not even for heads-up, although we know that for heads-up a GTO strategy exists. "
" And even if we knew, human players are generally not capable of playing perfectly, e.g. 3-betting exactly 65.734% of the time with a certain holding from a certain position against a certain position against a certain raise size, etc. "
So it's not really that it can't be solved for 6-max; it's that the solution is simply unknown, and furthermore practically impossible for a human to completely memorize and apply in real time... correct? Just to clarify.
Hmm, not even that.
For 6-max there is no game theory optimal solution, because in 6-max the players can cooperate (e.g. squeezing is a way of cooperating without needing to actually make an agreement with another player), and if players cooperate there is no game theory optimal solution. So it can't even be solved for 6-max.
This is actually not correct, as players in a poker game are acting individually, and thus it is a non-cooperative game.
As mentioned above, Snowie ranges aren't GTO because Snowie isn't a solver.
In general, there is some misunderstanding in the OP about what GTO means.
GTO doesn't mean unbeatable. It is just the equilibrium solution for two players trying to maximally exploit each other.
It is definitely possible to get out of line against an "optimal" solution to try to exploit it: if your opponent doesn't adjust to your strategy, you can profit. The catch is that the magnitude of the counter-exploitation is much bigger than that of the exploitation. So if your opponent catches on to your strategy (say, you're playing too loose against what you perceive to be a tight range from the Villain), then he can hurt you back very quickly.
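A toy numeric illustration of that asymmetry (rock-paper-scissors with made-up frequencies, not a poker solve): max-exploiting a small leak earns a little per round, while being counter-exploited by an opponent who adjusts loses a lot.

```python
# Exploiting a leak earns a little; getting counter-exploited loses a lot.
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(a, b):
    """+1 if a beats b, -1 if b beats a, 0 on a tie."""
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

def ev(strat_a, strat_b):
    """Expected payoff to A; strategies are dicts of move -> probability."""
    return sum(strat_a[m] * strat_b[n] * payoff(m, n)
               for m in MOVES for n in MOVES)

leaky = {"rock": 0.5, "paper": 0.25, "scissors": 0.25}   # over-throws rock
exploit = {"rock": 0.0, "paper": 1.0, "scissors": 0.0}   # max-exploit: all paper
counter = {"rock": 0.0, "paper": 0.0, "scissors": 1.0}   # Villain's adjustment

print(ev(exploit, leaky))    # +0.25 per round while the leak lasts
print(ev(exploit, counter))  # -1.00 per round once Villain catches on
```

The exploit gains 0.25 units per round, but the moment Villain adjusts, the same pure strategy bleeds 1.00 per round, four times the upside.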
So what's making Snowie's 6-max preflop range so commonly suggested?
I don't know about others, but I use Snowie ranges often because they're free, pretty reasonable, and pretty convenient.
As for Snowie's tightness, there's nothing wrong with playing a bit tight at the micros/low-stakes. Playing tight is the first thing new-ish players should learn. Rake also means that one should play tighter.
Not true! The definition of an optimal solution is that it cannot be exploited.
" I don't know about others, but I use Snowie ranges often because they're free, pretty reasonable, and pretty convenient. "
Gotcha, appreciate the response.
" As for Snowie's tightness, there's nothing wrong with playing a bit tight at the micros/low-stakes. "
Are we sure about this? It used to be the case in the past, but back then players were MEGA loose at low stakes. They've tightened up significantly since. Online poker is tough as nails today, even at the micro stakes; a $0.01/$0.02 game is almost certainly tougher to beat than a live $1/$2 game.
I think what he's saying, Samu, is that if I change my range from the one they're playing optimally (GTO) against (e.g. by loosening it up drastically), then unless the opposition adjusts correctly and quickly, their previously (and still current) optimal strategy will no longer be optimal, and can be exploited.
Samu Patronen Yes, you're correct. I was imprecise in my comment.
Yeah, definitions matter here. An optimal solution against a non-optimal strategy is an exploitative strategy, whereas an equilibrium solution cannot be exploited.
What belrio said is actually true if optimal = best possible strategy against any strategy other than the perfect, unexploitable strategy.
So a Nash equilibrium (GTO strategy) does exist for 6-max NLHE. However, the devil is in the details. A Nash equilibrium is defined as a situation where no single player can improve their EV by unilaterally changing their strategy. If two players change together, however, all bets are off (the key word in the definition is "unilaterally").
I believe it is also possible for one player's strategy change to reassign EV between two other players (at no benefit, or even at a cost, to themselves). An example from The Mathematics of Poker has three players X, Y and Z. If I recall correctly, X increases their bluffing frequency above optimal (say costing X $2.50), which costs Y some money as well (say $5), because Y can't realise their equity effectively given the risk of an overcall/raise from Z (who gains the $7.50 the other two lose), even when X bluffs too much. However, if Z then calls a little less than max-exploit, X gains from that. So X and Z acting together can effectively divide up the EV cost to Y by cooperating.
Snowie is very definitely not Nash. It is based on computer simulation searching for good strategies, but not ones guaranteed to converge to Nash equilibria. My own take is that the Snowie ranges are OK but no better. People like them because they are free and not a bad place to start (and probably better than what an average person can come up with on their own). That take is based on having done a fair amount of work with Monker, which can solve 6-max but does so using a bucketed version of the game tree.
Where did you find the conclusion that a Nash Equilibrium exists for a 6-handed game?
Nash's Existence Theorem
Nash proved that if we allow mixed strategies, then every game with a finite number of players in which each player can choose from finitely many pure strategies has at least one Nash equilibrium.
Nash equilibria need not exist if the set of choices is infinite and noncompact. An example is a game where two players simultaneously name a natural number and the player naming the larger number wins. However, a Nash equilibrium exists if the set of choices is compact with continuous payoff.[14] An example, in which the equilibrium is a mixture of continuously many pure strategies, is a game where two players simultaneously pick a real number between 0 and 1 (inclusive) and player one's winnings (paid by the second player) equal the square root of the distance between the two numbers.
From Wikipedia. But I've read papers where it is discussed; I just haven't got them to hand right now.
Poker is a zero-sum game with finitely many players. You may claim that continuous bet sizes screw it up, but we can't bet continuously anyway!
More technical
http://people.csail.mit.edu/costis/6853fa2011/lec3.pdf
If you include rake, the game is non-zero-sum, so the analysis doesn't go through, but you can get correlated equilibria, I believe. In some sense, in a symmetric negative-sum game it is kind of obvious that the best strategy is not to play! But I think you can do meaningful stuff with that too. If you look at the University of Alberta computer poker site, there is a thesis from 2014 which discusses this.
Actually, the MIT link is probably not helpful. Just rereading some stuff, there are also questions about the difficulty of computing strategies and their efficacy (recent work).
https://www.mdpi.com/2073-4336/9/2/33/pdf
This discusses the CFR algorithm, which I believe is what Monker uses. It produces strong strategies, but it's not certain they are guaranteed to be Nash. I'm doing this a bit on the fly, but hopefully these links can get you started.
This was the thesis I originally looked at; it discusses multiplayer algorithms, but it's from 2014.
https://poker.cs.ualberta.ca/publications/gibson.phd.pdf
Yeah, I read that article too, but if I remember correctly the proof is only valid for non-cooperative games, so I was just curious.
The problem I have is that I can't find a good explanation of whether No Limit Hold'em is a non-cooperative game or not.
https://en.wikipedia.org/wiki/Non-cooperative_game_theory
In game theory, a non-cooperative game is a game with competition between individual players, as opposed to cooperative games, and in which alliances can only operate if self-enforcing (e.g. through credible threats).
The key distinguishing feature is the absence of external authority to establish rules enforcing cooperative behavior. In the absence of external authority (such as contract law), players cannot group into coalitions and must compete independently
I actually think the analysis does still work in the non-zero-sum case. I will look into it and try to check.
Yeah, when you put it that way, players in a poker game (if it's not rigged) are completely independent and will optimize their own game, and as such it is a non-cooperative game. Hence you're correct: there is a GTO solution for No Limit Hold'em even with more than two players.
Thx :)
I think the reason it doesn't work for cooperative games is illustrated by my example above. In poker there is no way to enforce the alliance between X and Z, even though it can occur sometimes.
To be really pedantic, a Nash equilibrium :)
" However, the devil is in the details. A Nash equilibrium is defined as a situation where no single player can improve their EV by unilaterally changing their strategy. If two players change together, however, all bets are off (the key word in the definition is 'unilaterally'). "
I don't see why a third party being cooperative makes any difference in POKER. Maybe it does for other games that are more abstract and infinite, but that's not the case with poker, which can be entirely math-based and finite.
Can you explain simply why, if we know our opponents' strategies (or ranges), and we also know they are going to play a GTO strategy, there wouldn't be a GTO solution against them no matter how many opponents there are, so long as they don't adjust (opening themselves up to being exploited, while also opening us up to exploitation if we don't adjust)?
Whether there are 3 or more players makes absolutely no sense to me whatsoever.
If three players A, B and C play rock-paper-scissors for money, using a GTO strategy...
Then A, B and C each randomize equally what they throw, so as not to be exploited, because every other player will also be throwing equally randomly.
Each player will then win 33% of the time while being paid 2:1 on their wins, thus breaking even with a GTO-based strategy.
You could take this example and apply it to many, many opponents, and/or to poker as well, as long as you know every person's range and that they're playing GTO.
I don't see what makes it even questionable that a GTO strategy exists for 6-max, 9-max, etc.
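The break-even arithmetic in that claim checks out, assuming the stated payout (winner is paid 2:1, losers forfeit their stake). Using exact fractions to avoid float noise:

```python
# Check the break-even claim: win 1/3 of the time, paid 2:1 on wins,
# lose your stake the other 2/3 of the time.
from fractions import Fraction

p_win = Fraction(1, 3)
ev_per_round = p_win * 2 - (1 - p_win) * 1
print(ev_per_round)  # 0 -- a uniform "GTO" strategy breaks even 3-handed
```

This only shows the symmetric strategy profile is break-even, though; it says nothing yet about what happens if two of the three players coordinate, which is the point raised below.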
In rock-paper-scissors, when two players cooperate, they can beat the third player by randomly playing two different choices (so the two players never play rock-rock, scissors-scissors, or paper-paper).
So yes, "unilaterally" is a key word, but unilaterally also means that players can't cooperate. So I questioned whether there was a GTO solution in 6-max/9-max, or better said, whether players could cooperate in poker. But I guess the answer is that players can't cooperate in 6-max/9-max poker (if the game is not rigged), so there is a GTO solution.
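The cooperation claim above can be checked by brute-force enumeration. One caveat: 3-way rock-paper-scissors has no canonical money rules, so the payout rule used here is an assumption (if exactly two symbols appear, the players holding the winning symbol split the pot; all-same or all-different is a wash).

```python
from itertools import permutations

BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
MOVES = list(BEATS)

def settle(throws):
    """Net result per player for one round, $1 ante each.
    Assumed house rule: if exactly two distinct symbols appear, the
    players holding the winning symbol split the whole pot; otherwise
    the round is a wash (antes returned)."""
    distinct = set(throws)
    if len(distinct) != 2:
        return [0.0] * len(throws)
    a, b = distinct
    winner = a if BEATS[a] == b else b
    pot = float(len(throws))
    winners = [i for i, t in enumerate(throws) if t == winner]
    return [pot / len(winners) - 1.0 if i in winners else -1.0
            for i in range(len(throws))]

# A and B cooperate: they agree to throw rock and paper, randomly deciding
# who throws which. C plays the uniform "GTO" strategy on their own.
ev = [0.0, 0.0, 0.0]
for a_throw, b_throw in permutations(["rock", "paper"]):  # 2 assignments
    for c_throw in MOVES:                                 # C's 3 throws
        for i, r in enumerate(settle([a_throw, b_throw, c_throw])):
            ev[i] += r / 6

print(ev)  # A and B each average about +$0.083 (+1/12); C averages -$0.167 (-1/6)
```

Under this rule set the coordinating pair does profit at C's expense, even though C plays the symmetric equilibrium. Note also that on any single assignment, the player holding the winning symbol keeps the money; nothing inside the game forces them to share, which is exactly the "self-enforcing alliance" problem discussed in the next comments.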
Yeah, the cooperation can't be enforced and is unstable. So in my example, players X and Z can effectively work together via Z not calling enough, but nothing stops Z from taking all the money for themselves once X has made the bet.
I don't have my Mathematics of Poker to hand at the moment (it's in storage), so I can't give you the detailed example. I might try to construct one if this doesn't convince you. It is not a scientific or purely mathematical point. The way to think about it: in the 3-handed situation, X bluffing too much costs X and Y money even if Y knows what X is doing, because even though Y would like to call more, Z is behind them, so Y can't widen their range that much. If Z then lets through more of X's bluffs than he should when Y folds, X gets some EV back. Y can take some EV from X by calling a lot more, but screws themselves over because of Z behind them.
An additional, broader point: if you are shown a counterexample (or an example) in a simpler form of the game, it is extremely likely that that perverse example will show up in the bigger, more complicated game. The onus then switches to you to show that the more complicated game doesn't contain that kind of example (which is often impossible), so that you don't need to worry about it. Admittedly, my example above does not do that, as I didn't even provide a [0,1] example of ranges with this setup. But if I did, the assumption is not "oh, this is mathematical and won't apply in a real game"; it's "given this game is simpler, it probably does apply in the real game unless I can think of a good reason why not", or at the very least "I have no reason to expect this not to happen in the main game, and so I should worry about it".
Maybe I misdirected my last comments. On phone. Apologies.
" It is not a scientific or purely mathematical point. "
It seems like Y could still find a GTO solution by tweaking his range and/or postflop strategy to counter X's over-bluffing, as well as Z when Y calls wider, simply using math/Nash, while still not being exploited preflop...
...to the point that neither could profitably counter Y.
I do appreciate the example though
Was this a response to the maths example I posted (with actual numbers)?
OK, here is my attempt at a poker-like game that shows this feature. I will refine and add to it later if it's wrong.
We have three players X, Y and Z. Pot = $1. X can bet pot. If X checks, so must everyone else (this makes the analysis simpler). It's a [0,1] game, so a player wins if their number is higher.
X has 1.0 50% of the time and 0.0 the other 50%.
Y has 0.3 100% of the time.
Z has 0.5 80% of the time and 0.1 the rest of the time.
Nash for X is to bet 100% of the 1s and 50% of the 0s.
Y will never call here, as even if Y were heads-up against X, their call would be 0 EV.
If Y calls, Z will call 100% of their 0.5s, as Z is getting 3:1 on the call and will win 33% of the time. These overcalls by Z therefore push Y's call into negative EV.
Y only realises equity when X doesn't bet. The value of the game to Y is therefore
P(X checks) * P(Z has the 0.1) * $1 = 0.25 * 0.2 * 1 = $0.05
Now here is the crucial bit. If X bets 100% of all hands, Y still can't call, as he will only win the hand P(X bluffs) * P(Z folds) = 0.5 * 0.2 = 0.1 of the time:
EV[Y call | X bets] = 0.1 * $2 - 0.9 * $1 = -$0.70
Now, however, Y never gains any EV from the game; it has been taken away by X's actions!
Who gains this EV? Well, given that Y was not calling anyway when X was betting, it must be Z. (I think I can prove this if it's not obvious.)
Now, in this circumstance Z will always call, as X is bluffing too much and Z is getting 2:1 on a call with 50% equity. So X will lose EV as well, and Z really benefits here.
Note that this is now not a Nash equilibrium, as X can unilaterally improve their EV.
But if Z actually drops their calling frequency below what it was when X was bluffing correctly, some EV can be transferred back to X (whose bluffs now have positive value). Y still can't do anything about any of this, as Z will overcall against Y whenever they can.
Y can punish both themselves (losing even more EV) and X by calling more (all to Z's benefit), but the cooperation of Z and X has taken a game where no player could unilaterally improve their EV to one where Y has lost EV.
The reason this doesn't ruin games in practice (on purpose, anyway; these situations do come about accidentally) is that X has no way to make Z cooperate (short of cheating). If Z maximises their own EV, they will call 100%, disincentivising X, which forces X to bluff correctly again.
Let me know if any of these steps need fleshing out with the numbers, or if you have any questions. It's also possible this example is wrong, as I knocked it up in about 15 minutes, so corrections are welcome.
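The numbers above can be sanity-checked by enumeration. This sketch follows the strategies as stated (X bets all his 1.0 hands plus some fraction of his 0.0 hands; Y always folds to a bet; Z calls a bet only with the 0.5 hands) and treats the $1 starting pot as dead money someone collects:

```python
from itertools import product

# The toy game: pot $1, X may bet pot; if X checks, everyone checks down.
# Hand distributions as (value, probability):
X_HANDS = [(1.0, 0.5), (0.0, 0.5)]
Y_HANDS = [(0.3, 1.0)]
Z_HANDS = [(0.5, 0.8), (0.1, 0.2)]
POT, BET = 1.0, 1.0

def game_ev(x_bluff_freq):
    """Return (EV_X, EV_Y, EV_Z) given: X bets all 1.0 hands and
    x_bluff_freq of his 0.0 hands; Y always folds to a bet; Z calls
    a bet only with 0.5. EVs are net money won from the $1 dead pot."""
    ev = [0.0, 0.0, 0.0]
    for (xv, px), (yv, py), (zv, pz) in product(X_HANDS, Y_HANDS, Z_HANDS):
        p = px * py * pz
        bet_freq = 1.0 if xv == 1.0 else x_bluff_freq
        for betting, pb in ((True, bet_freq), (False, 1.0 - bet_freq)):
            w = p * pb
            if not betting:
                vals = [xv, yv, zv]          # check-down: best hand takes pot
                ev[vals.index(max(vals))] += w * POT
            elif zv == 0.5:                  # Z calls: X vs Z showdown
                if xv > zv:
                    ev[0] += w * (POT + BET)
                    ev[2] -= w * BET
                else:
                    ev[2] += w * (POT + BET)
                    ev[0] -= w * BET
            else:                            # everyone folds, X takes the pot
                ev[0] += w * POT

    return ev

for bluff in (0.5, 1.0):
    x, y, z = game_ev(bluff)
    print(f"X bluffs {bluff:.0%} of 0s: EV(X)={x:+.2f} EV(Y)={y:+.2f} EV(Z)={z:+.2f}")
# X bluffs 50% of 0s: EV(X)=+0.75 EV(Y)=+0.05 EV(Z)=+0.20
# X bluffs 100% of 0s: EV(X)=+0.60 EV(Y)=+0.00 EV(Z)=+0.40
```

The output matches the post: at the 50% bluffing frequency Y's EV is exactly the $0.05 computed above, and when X over-bluffs to 100%, Y's $0.05 vanishes, X gives up $0.15, and all $0.20 of it lands with Z.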
That's fine for me; I'm convinced.
Snowie is not "tight" by any stretch. If anything it's too loose for most people, but it can handle turn and river play, so its wide ranges work for it (e.g. K6o). I would not go as wide as Snowie in a serious game. If you look at its ranges, they are difficult for all but a few players to play profitably because of how wide it goes. I don't see how anyone could say it was tight. IMO it's exactly as wide as it is able to maneuver with, and most people should tighten its ranges.
I wish someone would do a toy-game match of Pio vs Snowie (maybe they have) and see what happens, as Pio says check a lot in spots where Snowie says bet. I wonder if it bets past the catch point, or just what would happen. I'm sure someone at Snowie has sat there and made a program that could war them against each other. It would be interesting.