Maybe June should be "other sport month" here at Advanced NFL Stats. Except for Albert Haynesworth's principled, valiant stand against being forced to play defensive tackle a foot and a half to the right from where he is accustomed, there's not much going on in the NFL. Fortunately, there's plenty of other sports going on, including the world's biggest events in soccer and tennis.
Soccer and tennis offer two of the best examples of simple two-strategy zero-sum game theory. Soccer offers us the penalty kick, when a player kicks to the left or right extreme side of the goal so hard that the goalkeeper must simultaneously guess a direction to lunge. Tennis gives us the serve, where the server aims for either the extreme forehand or backhand side of his opponent's service box. Both examples give us the opportunity to examine how well experts are able to approximate the optimum strategy mix.
In any two-player zero-sum game with two strategy choices and no obvious "dominant" strategy that is always preferred, there is an optimum mix of strategies that guarantees a minimum long-term payoff. This is known as the minimax solution, and in this case it's also a Nash equilibrium: if both players are playing their minimax strategy mix, neither player has any incentive to change his mix. At this equilibrium, the average payoffs for each strategy will be equal.
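To make that concrete, here's a minimal sketch in Python of the closed-form minimax mix for a 2x2 zero-sum game. The penalty-kick scoring probabilities are made up for illustration, and the formula assumes no dominant strategy, so the equilibrium is fully mixed:

```python
def minimax_mix(a, b, c, d):
    """Row player's minimax mix in a 2x2 zero-sum game with payoff
    matrix [[a, b], [c, d]] (row player's payoffs), assuming no
    dominant strategy so the equilibrium is fully mixed."""
    denom = a - b - c + d
    p = (d - c) / denom              # P(row plays strategy 1)
    value = (a * d - b * c) / denom  # guaranteed long-run payoff
    return p, value

# Hypothetical scoring probabilities for a kicker:
# rows = kick left/right, columns = goalie dives left/right.
p, value = minimax_mix(0.60, 0.90, 0.95, 0.50)
# p = 0.6, value = 0.74: kick left 60% of the time, and either
# goalie choice yields the same 0.74 expected scoring rate.
```

Note the equal-payoff property: at p = 0.6, the kicker scores 74% of the time whichever way the goalie dives, which is exactly the condition the studies below test for.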
In cases like modern NFL football, there is clearly an imbalance in payoffs between the two strategies: passing, in most situations, has a significantly higher payoff than running. But in soccer and tennis, the payoffs are much easier for the players to measure, and the actual strategy mixes differ from the theoretically optimum mixes by very small margins. Players either win the contest at hand (the kick or the point) or they don't. By comparison, football payoffs are very complex. They're a function of down, to-go distance, yards gained or lost, and original line of scrimmage, not to mention the possibility of a turnover.
In 1998, Mark Walker and John Wooders examined serve strategy in men's tennis matches at Wimbledon. Servers target either the extreme forehand or extreme backhand side of their opponent's service box. Returners' strategies are harder to discern: they can position themselves at any point between the extremes, either splitting the difference or guessing to one side or the other.
The payoffs were whether the server ultimately won the point or not. Although a lot can happen between the serve and the end of a point in tennis, the game can be "collapsed": in essence, the payoff becomes the probability of winning the ensuing point. If tennis players are playing at the minimax, the average payoffs of serving to the forehand and backhand sides of the court should be approximately equal.
Walker and Wooders defined a "game" as an entire match divided into deuce-court serves and ad-court serves. They found that of the 40 "games" they analyzed, the vast majority were not significantly different from the minimax. Using the Pearson statistic and the Kolmogorov-Smirnov test (statistical methods for estimating how much a data set differs from what's expected), they concluded that tennis players are able to intuitively approximate the minimax equilibrium.
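As a sketch of what such a test looks like (the serve counts below are invented, not taken from the study), the Pearson statistic for the equal-payoff condition can be computed directly:

```python
def pearson_equal_payoff(wins_f, n_f, wins_b, n_b):
    """Pearson chi-square statistic (1 degree of freedom) testing
    whether the server's win rate is the same on forehand-side and
    backhand-side serves -- the equal-payoff condition minimax implies."""
    p_hat = (wins_f + wins_b) / (n_f + n_b)  # pooled win rate under the null
    stat = 0.0
    for wins, n in ((wins_f, n_f), (wins_b, n_b)):
        exp_w, exp_l = n * p_hat, n * (1 - p_hat)
        stat += (wins - exp_w) ** 2 / exp_w
        stat += ((n - wins) - exp_l) ** 2 / exp_l
    return stat  # above 3.84, reject equal payoffs at the 5% level

# Hypothetical "game": 70 wins on 100 forehand serves vs. 50 wins on
# 100 backhand serves -- a payoff gap big enough to reject minimax play.
stat = pearson_equal_payoff(70, 100, 50, 100)
```

When the two win rates are identical the statistic is zero; the bigger the payoff gap between the two serve directions, the larger the statistic grows.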
In a 2003 study, economist Ignacio Palacios-Huerta studied penalty kicks from the top English, Spanish, and Italian professional soccer leagues. He found that strategy choices differ according to the kicker's dominant foot; for example, a left-footed kicker is stronger kicking to his left and kicks more frequently to that side. Palacios-Huerta therefore divided the sample of kicks into dominant-side and non-dominant-side kicks. Penalty kicks are more informative than tennis serves because the defender's (the goalkeeper's) strategy choice can be directly observed, and he calculated the optimum equilibrium mix between left and right for both kickers and goalies. Using statistical tests similar to Walker and Wooders', Palacios-Huerta found that soccer players also play minimax.
In 2004, researchers Shih-Hsun Hsu, Chen-Ying Huang, and Cheng-Tao Tang studied a larger data set of tennis serves than Walker and Wooders did, including men's, women's, and juniors' matches. Their findings were more mixed. Although they found about the same minor deviations from the minimax strategy mixes, they used more stringent statistical tests and concluded that players deviated from the minimax more often than Walker and Wooders' study suggested. Further, they found that certain "rule of thumb" strategies matched the players' actual strategies better than a pure minimax solution. In other words, experts don't really try to play minimax strategy mixes; instead, they follow simple rules that can approximate the theoretical minimax.
Ofer Azar and Michael Bar-Eli studied penalty kicks from the Israeli soccer league and found that players' strategies do not deviate significantly from the minimax. Their study is notable because it accounts for the Neeskens effect, a neat story in itself.
Traditionally, penalty kicks are kicked to either extreme side of the goal, forcing the goalkeeper to guess which direction to lunge to defend the goal. Then, in the 1974 World Cup final between West Germany and the Netherlands, Johan Neeskens shocked the soccer world (and the opposing goalkeeper) by kicking a penalty kick straight into the middle of the net. No matter which way the goalie lunged, his kick would likely score. Within two years of Neeskens's kick, penalty kick success rates in international play increased 11%.
Azar and Bar-Eli modeled the penalty kick as a 3x3 game instead of a 2x2 game, taking the "center" strategy into account. They compared the Nash equilibrium to other strategy options, such as "probability matching," and concluded that the players' ability to find the optimum solution to even this more complex game is strong support for experts' ability to play at the optimum strategy mix.
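One way to see that simple adaptive behavior can land on the minimax of even a 3x3 game is fictitious play, where each side just best-responds to the opponent's observed frequencies. This is a generic illustration, not the authors' method, and the 3x3 scoring probabilities below are invented:

```python
def fictitious_play(payoff, rounds=20000):
    """Fictitious play for a zero-sum game: each round, both players
    best-respond to the opponent's empirical mix so far.  In zero-sum
    games, the empirical frequencies converge to a minimax mix.
    payoff[i][j] is the row player's (the kicker's) payoff."""
    n_r, n_c = len(payoff), len(payoff[0])
    row_counts, col_counts = [0] * n_r, [0] * n_c
    r, c = 0, 0  # arbitrary opening moves
    for _ in range(rounds):
        row_counts[r] += 1
        col_counts[c] += 1
        # kicker (maximizer) best-responds to the goalie's history
        r = max(range(n_r), key=lambda i: sum(
            payoff[i][j] * col_counts[j] for j in range(n_c)))
        # goalie (minimizer) best-responds to the kicker's history
        c = min(range(n_c), key=lambda j: sum(
            payoff[i][j] * row_counts[i] for i in range(n_r)))
    return ([n / rounds for n in row_counts],
            [n / rounds for n in col_counts])

# Invented scoring probabilities: rows = kick left/center/right,
# columns = goalie dives left/stays center/dives right.
kicker_mix, goalie_mix = fictitious_play([[0.55, 0.90, 0.93],
                                          [0.95, 0.30, 0.95],
                                          [0.93, 0.90, 0.55]])
```

Neither "player" here ever solves an equation; each just keeps score and reacts, and the long-run frequencies approach the equilibrium mix anyway, which is very much the article's larger point.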
There are additional soccer studies. Chiappori, Levitt (of Freakonomics fame), and Groseclose conducted a study of penalty kicks in 2002 and found that players did not deviate from the minimax significantly. (Levitt also co-authored a similar study on MLB baseball and NFL football, and found significant deviations. However, the study of such sports is drastically more complex than left/right and goal/no goal, and his study failed to capture the utilities of the outcomes properly.)
Another notable soccer study was done by GianCarlo Moschini in 2004. His study looked at goals in regular play rather than from penalty kicks. He also found support for the notion that players can play at the minimax equilibrium, but he also admits his study has “low power” to discern equilibrium play from non-equilibrium play. He suggests goalies might be shading too far toward the near side, for example.
Keep in mind what these studies are trying to find out. The question isn't whether the minimax is the optimum. That is a mathematically proven truth—there's no way around it. The question is whether humans can actually zero in on the optimum, or if we're even trying to.
Personally, I think this debate is silly. No one thinks the human brain can explicitly solve the math needed. And it should be no surprise that people might use simple rules to approximate the optimum. The mental costs involved in even attempting to derive mathematical perfection probably outweigh the added benefit above simple approximation. To know the optimum strategy mix in a simple 2x2 game, much less actually execute it, a player would first need to have perfect recall of lots of data. Then, he would need to solve for the intersection of two simultaneous linear equations. Take the typical football run-pass “game” below:
y_PASS = 4 - 7x   (the payoff of passing as a function of the defense's strategy mix x; the run payoff is a second, similar linear equation)
To find the minimax strategy mix, a player would need to solve those two equations and then find the x value of their intersection. Does anyone really expect someone to be able to do that in his head? Of course not. But humans have an uncanny ability to estimate the answer using shortcuts. I used to do it all the time, traveling near the speed of sound.
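For illustration, here is that computation spelled out. The pass equation is the one above; the run equation (y_RUN = 1 + 3x) is invented just to complete the example:

```python
def minimax_intersection(a1, b1, a2, b2):
    """Intersection of the payoff lines y = a1 + b1*x and
    y = a2 + b2*x: the x where both strategies pay off equally,
    which is the minimax strategy mix."""
    x = (a2 - a1) / (b1 - b2)
    return x, a1 + b1 * x

# y_PASS = 4 - 7x (from the text); y_RUN = 1 + 3x (hypothetical)
x, y = minimax_intersection(4, -7, 1, 3)
# x = 0.3, y = 1.9: at that mix, both strategies average the
# same 1.9 payoff, so neither can be exploited.
```

Two lines of algebra for a computer, but nobody standing in a huddle is consciously doing this arithmetic.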
Let’s say I’m flying in my F-18 and I see a MiG-29 I want to intercept. I want to close in on him as quickly as possible and either chase him off or gun him full of holes. I don’t know the MiG’s x or y (or z) coordinates, or his velocity or anything else. I can just see him with my eyes traveling across my canopy (windscreen). What I need to do is plot an intercept of two linear paths, a task no different in mathematical terms than solving for a mixed strategy Nash Equilibrium.
I was never very good at math under G forces, so what I'd do is maneuver to point the nose of my own aircraft out in front of the MiG's flight path. This is known as 'lead' pursuit. As I close in, the MiG will appear to me to drift either forward or aft on my canopy. If it's drifting forward, it means I haven't pointed my aircraft far enough out in front of the MiG's flight path. If it's drifting aft, it means I'm pointed too far out in front of his nose. I can make a correction, note how the MiG's apparent drift changes, and re-correct until the drift stops and the MiG just appears to get bigger and bigger. At that point, I have zeroed in on the solution to a complex set of simultaneous equations--intuitively, without any math.
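That correct-and-recheck loop can be sketched as a simple feedback iteration. Here observe_drift stands in for whatever the pilot sees on the canopy (positive drift meaning not enough lead), and the gain value is arbitrary:

```python
def zero_in(observe_drift, lead=0.0, gain=0.5, steps=40):
    """Iterative lead-pursuit correction: nudge the aim point in the
    direction of the observed drift until the drift dies out.  No
    equations are ever solved explicitly; the feedback does the work."""
    for _ in range(steps):
        lead += gain * observe_drift(lead)
    return lead

# Suppose the (unknown) required lead angle is 12 degrees, and the
# apparent drift is proportional to how far off the aim point is:
final_lead = zero_in(lambda lead: 12.0 - lead)
# final_lead converges to 12.0 -- the "solution" found without algebra
```

Each pass cuts the remaining error in half, so the aim point homes in on the answer after a few dozen corrections, exactly the way the drift cue does in the cockpit.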
In fact, the same calculations are required on the football field. A receiver maneuvering to catch a deep pass will adjust his route until the apparent drift of the ball is canceled out. A defender pursuing a ball carrier in the open field will make the same mental calculations to take the best angle.
For decades, psychologists have been puzzled by how baseball (or cricket) outfielders know where to go to catch a fly ball. Experiments have shown that they perform the same intuitive math that fighter pilots use. If you have an outfielder watch a rising hit and ask him to guess where on the field it will land, he does very poorly. But if you allow him to maneuver, he'll zero in on the ball easily and arrive just in time to make the catch. Outfielders perform what researchers call "optical acceleration cancellation."
You don’t have to be chasing MiG-29s or running backs or fly balls to do intuitive algebra. Chasing your runaway toddler at the playground or merging into traffic on the freeway require the same mechanics. I mention these examples just to suggest that humans are indeed capable of doing intuitive math similar to that needed to play at the minimax. Our ability, however, depends on the richness of the feedback available.
In a dogfight or on the athletic field, the feedback is immediate and evident. Real-world deviation from the minimax strategy mix can be due to a lack of robust feedback. Perhaps the deviation isn’t due to a flaw in the human intuition process but with the availability and relative complexity of the information needed to make optimum decisions.
In soccer penalty kicks, the feedback is very simple—either you score a goal or you don't. In football, the feedback is very complex—a function of gain, down, distance, field position, turnovers, etc. The simpler the feedback, the closer to the optimum we can expect to be. But when the utility function is excessively complex, people will (often wisely) fall back on tradition and convention. My point isn't that chasing things and strategy mixes are identical problems, and the human brain can solve them in identical ways. My point is, given rich enough feedback, the human brain is capable of amazing feats, things we take for granted every day in sports and beyond.