Keith is a true stat-head. Aside from Drive-By, he is the chief analyst at numberFire, a slick and smart fantasy-oriented site. He graduated magna cum laude from Northwestern with a B.A. in math. Keith has worked with two NBA franchises, the Oklahoma City Thunder and the Philadelphia 76ers, doing statistical analysis and data management. In addition, he has worked with ESPN and the Wall Street Journal, contributing analysis across multiple platforms, and his sports analytics research has been featured at conferences across the nation.
Jack is a recent graduate of the University of Wisconsin-Madison with a bachelor's degree in mathematics and economics. Along with Marc, he is a colleague of Advanced NFL Stats's own Carson Cistulli at FanGraphs, and he also writes about baseball at Disciples of Uecker (a Milwaukee Brewers blog), among other places; his work has also been featured on ESPN and ESPN Insider. His love for the Wisconsin Badgers has instigated a foray into the world of football analysis as well, with a blog named Badger of Honor. His introduction to game theory as part of his college education opened whole new doors for analyzing football, adding a completely new dimension to watching the game beyond simply tallying fantasy points.
Zach writes about baseball for FanGraphs (is anyone noticing a trend here?), and he has contributed to countless other baseball sites on the interwebs. Zach's past football work can be found on KFFL.com from late 2008 and early 2009, but he suggests not searching for it unless you wish to be disappointed. He is forced to watch the Seahawks tank this year in an effort to draft Andrew Luck, and he remembers where he was when Bill Leavy ruined his life forever. You can follow him on Twitter @zvsanders.
Archives for January 2007
When we are engaged in any endeavor, we know we shouldn't think about how large a role luck plays in the outcome. It is usually counterproductive to dwell on luck because that tends to reduce our effort toward our goal. Think about it--why should I try so hard if it comes down to luck anyway?
In the NFL, if teams thought that way they'd probably be dead meat. But luck is a factor in all sports. Think about a very simple example game. Assume PIT and CLE each get 12 1st downs in a game against each other. PIT's 1st downs come as 6 separate bunches of 2 consecutive 1st downs, each followed by a punt. CLE's 1st downs come as 2 bunches of 6 1st downs, resulting in 2 TDs. CLE's remaining drives are all 3-and-outs followed by a solid punt. Each team performed equally well, but the random "bunching" of successful events gave CLE a 14-0 shutout.
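The bunching effect is easy to demonstrate with a quick simulation. Here is a minimal Python sketch--the per-drive touchdown probability and the number of drives are made-up illustrative values, not anything fitted to NFL data. Two teams with identical ability still produce a decisive "winner" in most simulated games.

```python
import random

def simulate_game(td_prob=0.2, drives=12, seed=None):
    """Simulate one game between two equally skilled teams.

    Each team gets the same number of drives and the same per-drive
    touchdown probability -- any difference in the final score is pure
    luck in how the successes happen to bunch together.
    """
    rng = random.Random(seed)
    score_a = 7 * sum(rng.random() < td_prob for _ in range(drives))
    score_b = 7 * sum(rng.random() < td_prob for _ in range(drives))
    return score_a, score_b

# Over many simulated games between two identical teams, most games
# still end with one side looking "better" on the scoreboard.
results = [simulate_game(seed=i) for i in range(10_000)]
decisive = sum(a != b for a, b in results)
print(f"{decisive / len(results):.0%} of games produced a winner by score alone")
```

The point isn't the exact percentage; it's that equal performance routinely yields unequal scores.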
The linear efficiency model I began using early in the season has an r-squared value of almost 0.75. That means that 75% of the variance in the outcome (season wins) can be explained by the model's variables. Including additional variables such as penalty yards or special teams improves the r-squared only marginally, and those variables are largely insignificant. Those factors are fairly random and chaotic anyway, which is one way to define luck. So we can conjecture that a measurable share of a team's win-loss record, though something less than 20%, is due to luck.
So how can we determine how lucky a team is? By using the model and estimating the number of wins a team "should" have based on its stats, the number of "expected wins" is calculated. The difference between a team's actual wins and its expected wins reveals how lucky a team is.
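In code the calculation is trivial once the model supplies expected wins. A sketch, using the Pittsburgh figures discussed in this post (8 actual wins against an expected 10.24):

```python
def luck(actual_wins, expected_wins):
    """A team's 'luck' is simply actual wins minus the wins the model
    says it 'should' have, given its efficiency stats.  Positive means
    luckier than its stats; negative means unluckier."""
    return actual_wins - expected_wins

# Pittsburgh went 8-8 while the model expected 10.24 wins:
# about 2 wins' worth of bad luck.
print(round(luck(8, 10.24), 2))
```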
Seattle appears to have been the luckiest team this year, winning about 4 more games than we would expect given their stats. They squeaked out 1 game over .500 in a very weak NFC-West to make the playoffs. If they were in the AFC, they probably would not have made the postseason at all.
Minnesota appears to be the unluckiest. This is probably due to their league-leading run defense; their stats predicted several more wins than they actually had to show for it.
Notice Pittsburgh's expected win number--10.24. They played well enough this year to win 10 games but, according to most analysts, "collapsed" and didn't make the playoffs a year after winning the Super Bowl. But their regular season record last year was 11-5. One could make the case that the Steelers played only slightly worse than they did last year, but just got unlucky.
Jim Mora, Jr., former coach of the Falcons, or other fired coaches might have used this to save his job. "Mr. Blank, look, we actually played well enough to win 9 or 10 games and would have made the playoffs!" But somehow I don't think that would fly, no matter how sound the math.
By creating a notional league-average team, i.e. a team with the average offensive and defensive passing and running efficiency stats and turnover margin, I could determine the probability that any NFL team would beat this notional average opponent. I could simulate a neutral site by setting the home field advantage variable to 0.5 (instead of 1 or 0).
Then by sorting the teams by their generic win probability vs. the average team, we can create an efficiency ranking, similar to the common "power" rankings but far more objective. Here is how the efficiency rankings ended for the 2006 season:
By the way, SS is for the Seahawks and NY is for the Jets. I had to use non-standard abbreviations so that the teams sorted in the same alphabetical order in abbreviated and non-abbreviated form.
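The ranking procedure can be sketched like this. Every coefficient, stat value, and the difference-from-average formulation below is a placeholder for illustration--not the model's actual fitted values:

```python
import math

def generic_win_prob(team_stats, league_avg, coeffs, intercept, hfa_coeff):
    """Probability a team beats a notional league-average opponent.

    Setting the home-field variable to 0.5 simulates a neutral site.
    The coefficient names and values passed in below are invented for
    illustration, not the fitted logit model.
    """
    z = intercept + hfa_coeff * 0.5              # neutral-site home field
    for stat in team_stats:
        z += coeffs[stat] * (team_stats[stat] - league_avg[stat])
    return 1 / (1 + math.exp(-z))

# Hypothetical coefficients and yards-per-attempt stats:
coeffs = {"off_pass": 1.1, "off_run": 0.5, "def_pass": -0.9, "def_run": -0.4}
league_avg = {"off_pass": 6.0, "off_run": 4.1, "def_pass": 6.0, "def_run": 4.1}
good_team = {"off_pass": 7.0, "off_run": 4.3, "def_pass": 5.5, "def_run": 4.0}

# Intercept chosen so an average team beats an average team 50% of the time.
p = generic_win_prob(good_team, league_avg, coeffs, intercept=-0.4, hfa_coeff=0.8)
print(f"P(beat the average team) = {p:.3f}")
```

Sorting all 32 teams by this probability produces the efficiency rankings.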
The model was never as accurate as it was in week 5. In week 6 it went 7-7, only 50% correct. My 5-year-old son regularly does better than that just by picking the team with more wins (and breaking ties with home field advantage). Over the rest of the season the model was less accurate than I had hoped, based on how well it could predict (retroactively) the 2005 season. I would understand why only after the season concluded.
But at the time, I kept using the model for more analysis. I modified the model somewhat to emphasize the latest 4 weeks of games more than those early in the season. I also could tell that home field advantage appeared too powerful as a predictor, so I reduced it slightly.
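The recency emphasis amounts to a weighted average of each efficiency stat. A minimal sketch--the 2x weight on the last 4 games is an arbitrary illustrative choice, not the exact emphasis the model used:

```python
def weighted_efficiency(game_values, recent_games=4, recent_weight=2.0):
    """Weighted average of a per-game efficiency stat that counts the
    most recent `recent_games` games `recent_weight` times as much as
    earlier games.  The 2x weight is purely illustrative."""
    early = max(len(game_values) - recent_games, 0)
    weights = [1.0] * early + [recent_weight] * (len(game_values) - early)
    return sum(w * v for w, v in zip(weights, game_values)) / sum(weights)

# A team averaging 5.0 yds/att early in the season but 7.0 over its
# last 4 games gets an estimate pulled toward its recent form.
season = [5.0] * 6 + [7.0] * 4
print(round(weighted_efficiency(season), 2))
```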
In the end the model was accurate only about 65% of the time.
The first week I tested the game-by-game model was week 5 of 2006. I was initially encouraged because it correctly predicted all 14 games that week. That streak of luck would not last long, however.
But keep in mind it doesn't really predict a winner, it produces probabilities. So for the BAL vs. CIN game, it might say BAL 0.60 CIN 0.40. Baltimore is favored with a 60% chance of winning. So if CIN wins, that doesn't necessarily mean that the model was wrong. But for the statistical purpose of judging the "fit" or validity of the model, we'll say that if the predicted favorite wins, the model was correct.
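Scoring the model under that convention is straightforward. The sample games below are made up to show the mechanics:

```python
def model_accuracy(predictions):
    """Score the model by the convention above: it is 'correct' when
    the team it favored (win probability > 0.5) actually won."""
    correct = sum((prob > 0.5) == a_won for prob, a_won in predictions)
    return correct / len(predictions)

# Made-up sample of (P(team A wins), did team A win?).  Note the 0.60
# favorite losing counts against the model, even though a 40%
# underdog winning is hardly shocking.
games = [(0.60, True), (0.60, False), (0.35, False), (0.80, True)]
print(model_accuracy(games))
```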
For week 5 the model produced the following probabilities:
If you look below at Week 2's highest win projections for AFC and NFC teams, San Diego and Chicago already began to emerge as the highest ranked teams, and they both went on to be #1 seeds in the playoffs.
Up to now, I had used linear regression to predict the total number of season wins based on efficiency stats. It was, and still is, very useful in helping to understand the importance of various phases of the game in terms of winning. But this method has its limits, notably in predicting the outcomes of individual games, and taking into account the strength of future opponents.
Using a different technique, we can keep the same stats that we've established as the best measures of a team's performance and strength in the 4 primary dimensions of the sport, plus turnovers.
By using a form of non-linear regression called "logit" regression, we can calculate the probability of a dichotomous outcome, i.e. that one team will beat another in individual games. The independent variables remain the same:
Off Pass Eff
Off Run Eff
Def Pass Eff
Def Run Eff
Turnovers
In the model there will be 2 sets of variables: the efficiency stats for Team A and those for Team B. The outcome variable is which team won; technically speaking, AWon = 1 if Team A won and 0 if Team B won.
The new model needs a database of games to analyze, so I prepared a data set of all the outcomes of every regular season game in 2005 with each team's corresponding efficiency stats. Each game is a "case," statistically speaking. There are 256 regular season games each year in the NFL, so I was confident that one year of data would be enough to establish significance of each variable. This also assumes that NFL football doesn't drastically change in nature from year to year, which I would come to learn is not a good assumption.
I also added home field advantage to the model. If Team A was the home team, the variable AHome = 1, otherwise it was 0.
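Here is a minimal sketch of the setup in Python, fitting a logit model by gradient ascent on synthetic data. To keep it short I use only 2 efficiency stats per team plus the home indicator, and the "true" weights used to generate outcomes are invented--this is the mechanics, not my actual data set or software:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a season's game database: one row per game.
n = 256
X = rng.normal(size=(n, 4))            # A_off_pass, A_off_run, B_off_pass, B_off_run
a_home = rng.integers(0, 2, size=n)    # AHome = 1 if Team A is the home team
X_full = np.column_stack([np.ones(n), X, a_home])

# Invented generating weights: Team A's stats help it win, Team B's
# hurt it, and home field gives a boost.
true_w = np.array([0.0, 1.2, 0.4, -1.2, -0.4, 0.5])
p_true = 1 / (1 + np.exp(-X_full @ true_w))
y = (rng.random(n) < p_true).astype(float)     # AWon = 1 if Team A won

# Fit the logit model by gradient ascent on the log-likelihood.
w = np.zeros(6)
for _ in range(5000):
    p_hat = 1 / (1 + np.exp(-X_full @ w))
    w += 0.1 * X_full.T @ (y - p_hat) / n

p_hat = 1 / (1 + np.exp(-X_full @ w))
accuracy = np.mean((p_hat > 0.5) == (y == 1))
print(f"in-sample 'favorite wins' accuracy: {accuracy:.1%}")
```

Statistical packages do this fitting (with proper standard errors) in one call; the loop above just makes the likelihood-maximization explicit.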
After running the regression, I was amazed at how well the model predicted winners. For the 2005 season, it predicted 74% of all regular season games correctly. By adding variables such as penalties or sacks, the model improved only very marginally to about 75% correct. Because I had to rerun the numbers each week during the season, every additional variable I used added effort to the task without enough of a benefit to be worthwhile.
The regression results look like this:
Mean of A_Won = 0.500
Number of cases 'correctly predicted' = 380 (74.2%)
Log-likelihood = -262.404
Likelihood ratio test: Chi-square(11) = 184.974 (p-value 0.000000)
                 Predicted
                  0      1
Actual    0     190     66
          1      66    190
Yes, I know it's 2007 now. But back in the early weeks of the 2006 NFL season the model I developed could use each team's efficiency stats to predict the team's number of wins. After accounting for strength of opponent, it started to make sense as early as week 2. Teams like Baltimore, Chicago, Philadelphia, and San Diego looked like division winners. By week 5, 7 of the 8 division winners were correctly predicted and 2 of the 4 wildcards were predicted.
Here is how the predicted wins looked after week 4.
The only division winner the model missed was New Orleans, which was actually predicted to win the division by Week 3, but not by Week 4. Still, it was close; plus, Atlanta suffered a nearly unprecedented let-down that got their coach fired.
The wildcard predictions are tougher by nature because it's not 4 teams competing for 1 spot; it's usually 6 teams competing for 2. Still, the NFC wildcards were accurately predicted as the Cowboys and Giants. In fact, the model accurately predicted the top-to-bottom rankings of the NFC-East. I'm not sure how many people thought the Eagles would win the East when the Giants were 6-0 and then Tony Romo caught fire in mid-season. The model predicted too many wins for the NFC-East in total because it could not account for head-to-head match-ups between division members. Obviously, a division is extremely unlikely to produce 4 teams that average 12 wins. But what's important here is the order in which the teams fall within their division.
The AFC wildcard predictions were incorrect. But remember, this was from week 4. Cincinnati would have slipped in if not for a missed extra point in week 16 or a missed short field goal against the Steelers in week 17. Additionally, Denver controlled its own destiny at the end. Had they won at home in week 17 against the 49ers, they would have made the playoffs.
Again, the main purpose of the model is to understand the inner workings of the game, not to predict outcomes. But by comparing the predictions of the model against actual outcomes, we can qualitatively verify the validity of the model. Besides, predicting winners is fun.
Week 4 was the last week I used this model. Realizing that the model could not take into account head-to-head match-ups, I switched to a game-by-game model.
If you look at the earlier post that lists various offensive and defensive stats and their correlation with season wins, you see that there are some that correlate with winning better than the stats I've used in my model. Points scored and points allowed, in particular, correlate very well with wins. Shouldn't those go in the model instead?
Dan Fouts, former quarterback and Monday Night Football analyst, can explain this better than I can. Or actually, Will Ferrell playing Dan Fouts on Saturday Night Live can. "Al, my prediction is that the team that scores more points than the other team will probably be the winner tonight. Back to you, Al."
Of course, a team that scores a lot of points and allows fewer points will win often. There is no mystery there. And lots of guys who predict NFL games or try to beat the point spread use such stats, or things like "red zone points," in various models. In fact, these kinds of models probably predict game outcomes well, but they would be completely invalid if you really wanted to learn anything new about how the game really works.
Models that use points scored or points allowed, or variations of either, are no more analytical than Dan Fouts. We already know that the ability to score more points than another team leads to winning. Thanks, Dan. The question is: what enables some teams to score more than others?
Another type of model, one that uses the laundry list of factors that correlate with winning, faces problems of interdependence of variables. In regression models, there is one dependent (outcome) variable and several independent variables. The independent variables cannot be interdependent with the outcome and cannot be interdependent with each other. For example, if a model includes a variable that measures passing effectiveness and a variable that measures red zone effectiveness, that would be invalid. General passing effectiveness and the ability to score in the red zone are deeply interrelated, and the regression model would be unable to assign valid weights to the coefficients of passing effectiveness and red zone effectiveness, no matter how you measure either. The model might be predictive, but you wouldn't learn a thing about what really leads to winning.
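A quick demonstration of why interdependent variables wreck the coefficients. In the synthetic sketch below, "red zone efficiency" is deliberately constructed to be nearly a copy of passing efficiency; refitting on 20 fresh samples shows the individual weights swinging wildly even though their sum (and the model's predictions) stays stable:

```python
import numpy as np

def fit_coeffs(seed):
    """OLS fit of wins on two nearly redundant predictors: a passing-
    efficiency stat and a 'red zone' stat constructed to be almost a
    copy of it.  All numbers are synthetic."""
    rng = np.random.default_rng(seed)
    pass_eff = rng.normal(size=200)
    red_zone = pass_eff + rng.normal(scale=0.05, size=200)   # deeply interrelated
    wins = 2.0 * pass_eff + rng.normal(scale=0.5, size=200)
    X = np.column_stack([pass_eff, red_zone])
    coef, *_ = np.linalg.lstsq(X, wins, rcond=None)
    return coef

coefs = np.array([fit_coeffs(s) for s in range(20)])
# The individual weights swing wildly from refit to refit...
print("std of each coefficient across refits:", coefs.std(axis=0).round(2))
# ...but their sum -- the only quantity the data pins down -- sits near 2.
print("mean sum of the two coefficients:", coefs.sum(axis=1).mean().round(2))
```

The model predicts fine either way, which is exactly the trap: good predictions, meaningless weights.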
There are other requirements for a linear regression model's validity, such as normally distributed, randomly scattered errors between the model's linear estimates and the actual values. I would guess that 99% of the prediction models out there on the internet don't even bother worrying about these things.
Since I had created a model of NFL team performance based only on offensive and defensive passing and running efficiency, plus turnovers, I could then predict the number of wins a team would be expected to have in a season based on the team's efficiency to date.
By using regression, the relative importance of each efficiency stat is weighed in order to best fit the data. I used team wins from the 2004 and 2005 seasons to derive the model's coefficients. The model would look something like this:
c + x*OffPassEff + y*OffRunEff + z*DefPassEff + w*DefRunEff + v*Turnovers = Team Wins
It's a straightforward linear model, where c is a constant, and v, w, x, y, and z are the relative "weights" or coefficients of the respective stats.
The reason I wanted to analyze these stats was to learn the relative weights of each efficiency stat. Is passing more important than running? Was defense more important than offense? Did turnovers matter and by how much? Plus, the explanatory power of these stats could be determined.
To cut to the chase, all 5 variables were significant at the p=.01 level or better. Each coefficient is scaled by its variable's standard deviation to produce a standardized coefficient, which lets us clearly see the relative importance of each variable.
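For the curious, here is what the fitting procedure looks like in Python on synthetic data. The generating weights are invented; only the mechanics mirror the model:

```python
import numpy as np

rng = np.random.default_rng(2)

# 64 synthetic team-seasons (two years' worth).  The generating
# weights below are invented, not the model's actual coefficients.
n = 64
stats = rng.normal(size=(n, 5))    # OffPassEff, OffRunEff, DefPassEff, DefRunEff, Turnovers
true_w = np.array([1.5, 0.6, -1.0, -0.4, 0.9])
wins = 8.0 + stats @ true_w + rng.normal(scale=1.0, size=n)

# Least-squares fit of  c + x*OffPassEff + ... + v*Turnovers = Team Wins
X = np.column_stack([np.ones(n), stats])
coef, *_ = np.linalg.lstsq(X, wins, rcond=None)

# Standardized coefficients: each weight scaled by its variable's
# standard deviation, for an apples-to-apples importance comparison.
std_coef = coef[1:] * stats.std(axis=0)

ss_res = np.sum((wins - X @ coef) ** 2)
ss_tot = np.sum((wins - wins.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print("standardized coefficients:", std_coef.round(2))
print("r-squared:", round(r_squared, 2))
```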
Variable    Std. Coeff.

r-squared = .79

The r-squared number means that almost 80% of the variance in team wins is accounted for by the model's variables. Together with the significance of each variable, this tells us the model is valid. The rest of the variance would be accounted for by factors not in the model (penalties or special teams, for example) and by random luck.
Bearing in mind this is based on only 2 seasons' worth of data (64 teams), we already learned a lot. Offensive passing appears to be the most important factor in producing a winning team. Defending the run appears to be the least important.
The conventional wisdom around the NFL, judging by the endless chatter of TV analysts, is that the key to winning is to be able to run the ball and play good defense. What this model suggests is just the opposite. If you want to win in the NFL, passing is more important than running, and offense is more important than defense.
Some might question why I quickly settled on the 4 efficiency statistics as those most descriptive of team performance and ability. It's a fair question.
Let's take a look at offensive passing stats and see the difference between Yds/Game and Yds/Att. In this exercise we'll expand the data set to get better results, so we're talking about the 04 and 05 NFL seasons.
Here's a look at some passing stats and their correlation with team wins:
STAT             CORRELATION
Comp Pct          0.347
Pass Yds*         0.193
Pass Attempts    -0.354

Completion percentage correlates slightly with team wins, but total pass yards (and yards per game) does not and is not significant. Even more surprising, the number of pass attempts correlates negatively with team wins. So the more often a team passes, the less likely it is to win.
We turn our attention back to the efficiency stat--Pass Yds/Attempt. It is merely Pass Yds divided by Pass Attempts, right? So we have an insignificant variable divided by one with a negative impact on winning. We would expect a fairly meaningless result, but we don't get one.
Yards per Attempt correlates positively with team wins, and it is very significant, even though the stats it is built from do not.
We would expect a slightly different situation with run efficiency. Total rushing yards should correlate more strongly with team wins because teams that are ahead tend to "run out the clock," racking up rushing yards (and attempts) while avoiding the risk of an interception. That is exactly the case, but using Rushing Yds/Att eliminates most of that effect.
See below for a list of metrics and their correlation with team wins. * denotes the correlation is not significant.
Pass Attempts 0.439
Pass Att/Game 0.438
Completion % -0.143*
Total Yds Allowed 0.028*
Yds/Game Allowed 0.028*
TDs Allowed -0.373
Interceptions (taken) 0.464
Sack Yds 0.407
Fumbles (taken) 0.307
Points Allowed/Game -0.701
Rush Yds Allowed -0.644
Yds/Rush Allowed -0.251
Yds/Pass Att Allowed -0.351
Points Scored/Game 0.711
Completion % 0.347
Total Pass Yds 0.193*
Pass Attempts -0.354
Ints Thrown -0.602
Net Turnovers 0.701
Fumbles Lost -0.543
Opponents' Avg Win% -0.111*
Along with the 4 primary measures of a team's performance and ability (offensive and defensive running and passing), it appears obvious that teams that are able to gain a turnover advantage are more likely to win games.
Running the numbers confirms the importance of turnovers in winning. The correlation between team wins and net turnovers (takeaways minus giveaways) was 0.80. This is very high--and significantly higher than the next most important dimension, offensive passing. We can go deeper and study the importance of fumbles vs. interceptions or takeaways vs. giveaways, but for now we'll just use net turnovers because it's simple and easy to measure.
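For reference, these correlation figures are plain Pearson coefficients. A self-contained sketch with toy numbers (not real NFL data):

```python
import math

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient -- the same measure used
    throughout these posts to relate each stat to season wins."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy numbers: teams with better net-turnover margins tend to have
# more wins, so r comes out strongly positive.
net_turnovers = [-12, -5, -1, 0, 3, 8, 14]
wins =          [  4,  6,  7, 8, 9, 11, 13]
print(round(pearson_r(net_turnovers, wins), 2))
```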
So it seems on its face that we have the beginning of a model to understand why teams consistently win. We've boiled down the stats to isolate the distinct performance of each phase of the game. We have:
Off Pass Yds/Att
Off Run Yds/Att
Def Pass Yds/Att
Def Run Yds/Att
The first task in understanding the game was to boil it down to its statistical foundations. What really makes one team better than another? There are 3 phases to the game: offense, defense, and special teams. Offense and defense can each be broken down into passing and running. Special teams consist of field goals, punting and kicking (and returning).
I theorized that offense and defense were far more important than special teams in winning. It's not that special teams don't matter in any one game; it's that they don't correlate strongly with winning over the course of a season. In the 2005 season, Field Goal Percentage correlated with wins at 0.054, which is very low and not statistically significant. Compare that to the correlation between sacks and wins, 0.393, and we see how much more sacks mean to winning than FG%.
Additionally, even if special teams stats did correlate with winning, they are highly unpredictable from week to week. It's also difficult to measure the performance of a punting squad, for example: are short punts bad if they pin the opposition inside the 10? What about FG%? It's hard to grade fairly because kickers with longer range are sent onto the field to try low-probability attempts.
So we're left with offensive running and passing, and defensive running and passing. There are many ways to measure these phases of the game but what is the best way to really measure how good a team is at each phase?
Total passing yards or total rushing yards would not be valid measurements. A team with a terrible defense is often playing from behind and will throw for large chunks of yardage in 4th-quarter "trash time." Teams with great defenses that carry leads into the 4th quarter will pound the ball on the ground, padding their total rushing yards. In each case, the "total" yards stat is a reflection of the team's defense as much as its offense. One might argue: doesn't that count? Shouldn't these things factor in? Yes, they should, but I want to isolate what really separates good teams from bad. Poor defense will factor in, but as you will see, its effect is isolated in the defensive stats.
The best measurement for each phase is yards per attempt. For passing, that means that a team is not rewarded for more attempts. All that matters is how many yards are gained with every drop-back. Incompletions, sacks, and interceptions count for zero. Running is simpler. There are no incomplete runs (though the Raiders try) so it is a straight average of yards per rush.
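Concretely, the efficiency stats can be computed like this. Here I count sacks as extra drop-backs and subtract sack yardage; one could define the passing stat slightly differently, so treat the exact formula as one reasonable reading:

```python
def pass_efficiency(pass_yards, attempts, sacks, sack_yards):
    """Net passing yards per drop-back.  Incompletions, sacks, and
    interceptions add drop-backs but no yards, so they drag the
    average down -- a team is not rewarded for merely attempting more
    passes.  (Counting sacks/sack yardage this way is one reasonable
    formulation, not the only one.)"""
    return (pass_yards - sack_yards) / (attempts + sacks)

def run_efficiency(rush_yards, rushes):
    """Running is simpler: a straight average of yards per rush."""
    return rush_yards / rushes

# Illustrative season totals (made up):
print(round(pass_efficiency(3500, 520, 30, 200), 2))   # net yds per drop-back
print(round(run_efficiency(1800, 430), 2))             # yds per rush
```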
So here are the 4 primary "core" statistics for measuring how "good" a team is, along with their correlations with team wins for the 03-05 seasons.
OFFENSE                          WIN CORRELATION
Yds Per Rush                      0.415
Yds Per Pass Attempt              0.594

DEFENSE
Yds Per Rush (given up)          -0.351
Yds Per Pass Att (given up)      -0.251
The 4 variables are statistically significant. Offensive stats appear to be more important than defensive, and passing appears to be more important than running.
Welcome to my page on NFL statistical analysis.
This site is where I plan to document my findings and interpretations of various statistical observations on the National Football League. This site is not intended to be widely read as much as it is to simply document my current hobby.
Over the course of the 2006 season, I began to use econometric statistical tools to understand more about the NFL. While most NFL sites, including this one, gravitate toward predicting winners, I was more interested in the internal workings of why things happen the way they do. Naturally, the outcomes of games are of interest, but I like to dig deeper and understand the game rather than just guess winners and losers. What most interests me is learning something that could change the way the game should be played.
For example, my first question had to do with whether defense or offense was more important to winning. It began as a water-cooler topic at work, but I thought: hey, instead of debating this in circles, we can get a definitive answer. I simply compared the correlation coefficients of defensive and offensive performance measurements with team wins. I don't even remember the answer (I think defense came out as more important, at least in 2005).
Throughout the 2006 season, I built econometric models of season win totals and game-by-game outcomes. I learned that a lot of things that are accepted as conventional wisdom in the NFL are not true.
I also learned there are a thousand other guys just like me out there doing the same stuff. But they had fancy websites, the vast majority of which focus on gambling--which does not interest me. So I'm throwing my hat into the ring and will post some of the interesting things I've found.