Team Efficiency Rankings - Week 7

I know. I know. These rankings make no sense to me either. Then again, last week the conventional favorites according to closing spreads went 5-9 while the game probabilities generated by the efficiency model went 8-6. (The NE-SEA game came up as a .50/.50 game, and I foolishly dug down to the third decimal place to make NE the favorite. Otherwise it might have been 8-5.)

The one thing that popped out at me was BAL falling to 17th. Everyone else has them as a top-5 team. At 5-1, their record certainly makes them look that way, tied for best in the AFC. But they are 3 simple game-deciding plays from being 2-4, and another 3 points from being 1-5: Tucker's squeaker of a FG vs NE, DAL's missed FG last week, and an incomplete Weeden pass into the end zone. Plus, they barely survived KC, winning 9-6 in a game in which they were mostly outplayed. Even before their recent injuries on defense, I think they've been generally overrated.

ATL is another curiosity. Despite having the best record in the NFL, they are 10th here. They have an average passing attack and they can't run the ball. Plus, they have an average pass defense except for a very high interception rate, which is bound to regress sharply.

NYG and GB are the top up-movers this week, thanks to beat-downs of top-ranked opponents on Sunday.

Here are your rankings for week 7. The efficiency stats that comprise the inputs of the model are shown below.

RANK  TEAM  LAST WK  GWP   OPP GWP  O RANK  D RANK
1     DEN   3        0.70  0.51     4       3
2     SF    1        0.70  0.51     5       2
3     HOU   2        0.61  0.49     9       11
4     CHI   6        0.61  0.48     18      1
5     NYG   14       0.61  0.55     2       22
6     CAR   10       0.59  0.50     11      16
7     GB    16       0.58  0.53     10      9
8     MIA   5        0.57  0.50     14      15
9     DAL   12       0.57  0.55     8       13
10    ATL   4        0.57  0.50     22      12
11    STL   15       0.56  0.54     16      8
12    SEA   17       0.56  0.54     17      4
13    PHI   7        0.54  0.50     19      7
14    DET   13       0.54  0.54     6       20
15    WAS   22       0.53  0.49     1       25
16    MIN   9        0.53  0.47     24      6
17    BAL   11       0.52  0.47     7       21
18    NE    8        0.51  0.49     3       30
19    OAK   23       0.50  0.54     13      23
20    CIN   18       0.47  0.47     12      24
21    PIT   24       0.47  0.51     15      17
22    TB    31       0.47  0.51     20      19
23    NYJ   19       0.45  0.52     27      14
24    CLE   25       0.44  0.50     28      10
25    ARI   21       0.42  0.52     32      5
26    SD    20       0.39  0.45     29      18
27    IND   26       0.38  0.49     21      26
28    BUF   28       0.38  0.46     23      29
29    TEN   27       0.35  0.51     26      28
30    NO    30       0.32  0.47     25      31
31    JAC   32       0.30  0.52     30      27
32    KC    29       0.27  0.44     31      32



51 Responses to “Team Efficiency Rankings - Week 7”

  1. Dave says:

    To these untrained eyes, Carolina is garbage. I'll be curious to see how long it takes for this model to recognize it.

  2. Anonymous says:

    ditto Carolina..I mentioned it 2 weeks ago..
    But what NFC domination: 12 of the top 15 are NFC teams...
    any ideas why this is so?

  3. Anonymous says:

    NE 18th? wow..the mighty have fallen!

  4. Anonymous says:

    Carolina is 1 spot ahead of GB, yet GB has both better O and D and a tougher average opponent, if I'm reading the numbers right.

  5. Eric says:

    Washington has really climbed in the last two weeks. Wild what two decent performances against teams ranked high in the model will do for efficiency ranks. A win against NY this week would catapult them further.

    It feels like Washington's INT rate should be higher but teams have passed so much against them that the rate is near the mean.

  6. James says:

    According to the rankings, the NFC North and East are the two best divisions top to bottom (0.57 and 0.56 GWP) and with the tightest spread in talent (SD of 0.03), with all eight teams in the top 15. The NFC West is almost equally strong (0.56) but is top heavy due to the Cardinals being down at #25.

    The biggest spread in talent is obviously the AFC West, home to both the best and worst teams (SD of 0.16), while the AFC North is the tightest group of teams and yet the third weakest division overall. At least the two worst divisions have the Broncos and Texans to fly their banner.
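    A quick sanity check on the division math above, using the GWP column from the rankings table (this is a minimal sketch; it uses population standard deviation, which matches the figures quoted):

```python
from statistics import mean, pstdev

# Team GWPs copied from this week's rankings table
afc_west = {"DEN": 0.70, "SD": 0.39, "OAK": 0.50, "KC": 0.27}
nfc_north = {"CHI": 0.61, "GB": 0.58, "DET": 0.54, "MIN": 0.53}

for name, division in [("AFC West", afc_west), ("NFC North", nfc_north)]:
    gwps = list(division.values())
    # AFC West spread comes out near 0.16, NFC North near 0.03
    print(name, "mean:", round(mean(gwps), 2), "SD:", round(pstdev(gwps), 2))
```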

  7. Rational Capital Management LLC says:

    Killer stats. As a financial quant, I've many times had the "look, dude, I know your opinion but this is what the numbers say" conversation. In any event, I'm wondering if you have the weekly efficiency ranking in a single spreadsheet. I'm looking to back-test a confidence pick-em strategy. Either way, thanks for the excellent analysis.

  8. Eric says:

    It's an interesting exercise to take the efficiency rank at face value and then apply conventional wisdom potential to guess where a team could end up.

    Take Chicago. 1st rank D passes the gut test and conventional wisdom would keep them ranked that high. 18 O-rank is reasonable, but conventional water cooler wisdom may say that the O has a better chance of improving than slipping. Ergo, that team will stay good and maybe get better.

    Conversely, efficiency ranks CAR and MIA middle of the pack in both O and D. But is there a looming top-10 upside on either side of the ball? Conventional water cooler wisdom says no, and therefore the gut test says both teams have a better chance of slipping than climbing or even hanging on to their ranks.

    That's how I look at these anyway. Rather than challenge the rank based on past performance, I ask "Will they remain?"

  9. Anonymous says:

    Anon #3 makes a good point. How or why is Carolina ahead of GB when all three inputs are higher? What stats aren't we seeing in the top table that make up that difference?

  10. Tim says:

    Penalty rate. Green Bay has a much higher penalty rate than Carolina.

  11. James says:

    Brian, I just realized that OPass is (Passing Yards - Sack Yards)/(Attempts). Shouldn't the number of sacks be included in the denominator? Is it really more predictive this way?

    It doesn't make a very big difference in the rankings but I find this decision peculiar. I always thought 'per attempt' implicitly included sacks like it does with ANY/A at PFR.
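    For anyone curious how much the denominator choice matters, here is a minimal sketch (the season totals are made-up numbers for illustration, not any team's actual stats):

```python
def net_ypa(pass_yards, sack_yards, attempts, sacks, sacks_in_denominator=False):
    """Net passing yards per attempt.

    The rankings here divide net yards by pass attempts only;
    ANY/A-style rates at PFR divide by attempts + sacks instead.
    """
    plays = attempts + sacks if sacks_in_denominator else attempts
    return (pass_yards - sack_yards) / plays

# Hypothetical season line: 2000 yards, 20 sacks for 120 yards, 300 attempts
print(round(net_ypa(2000, 120, 300, 20), 2))                             # 1880 / 300 = 6.27
print(round(net_ypa(2000, 120, 300, 20, sacks_in_denominator=True), 3)) # 1880 / 320 = 5.875
```

    About a 0.4-yard swing per attempt for a typical sack rate, which shifts rates for every team in the same direction, so the rankings barely move.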

  12. Monte says:

    Anons, ORANK and DRANK already have Opp GWP factored in. Really, overall rank is a function of ORANK, DRANK, and PENRATE. CAR is 4th best in PENRATE, while GB is 5th worst. That's the difference (as Tim says). Just also pointing out that Opp GWP is redundant when already looking at ORANK and DRANK.

  13. Arash Sadat says:

    Brian, I love this site and I love the weekly power rankings. Just one suggestion: can you provide a second set of rankings weighted towards recent outcomes? You linked to Michael Beuoy's betting market power rankings last season and he essentially determined that the point spread for an NFL game is based solely on the teams' performance over the past five weeks. I think these power rankings would be more accurate (especially later in the season) if recent outcomes were weighted more heavily.

  14. cmckitterick says:

    I've asked this before (and don't believe I received a response): Are these team efficiency rankings taken into account for live win probability calculations?

    This might be very difficult to do, but I imagine if the Giants are down 14-0 at half against the Cleveland Browns, it's a very different story than if Kansas City found themselves with a similar deficit. It would be cool if WP calculations reflected that difference.

  15. Misha says:

    CMC, no, the WP graphs do not include weightings based on efficiency models. The WP is based on an average team.

  16. cmckitterick says:

    Thanks, that's what I suspected, but good to know for sure. Averages are probably better to go by (certainly early in the season, when GWP isn't very reliable).

    My main concern is refuting those doubters: "Those Bears with their defense should definitely have punted and trusted the defense; their O just hasn't been getting it done today."

    Spouting the win probability sacrifice associated with punting away doesn't do much to convince them if it doesn't take into account the particulars of the teams playing.

    Even with that caveat though, it's still a great tool for analyzing plays.

  17. Trent says:

    Brian, would it be possible to include a column for change in rank between this week and last week? No, the subtraction isn't too difficult for me to do mentally (fortunately), but it would be really helpful to be able to sort teams based on the change.

  18. Trent says:

    As for this week's rankings, I think it's interesting to see the Jets fall after beating up on IND. I'm definitely not questioning the model, it's just one of those counterintuitive results.

  19. Anonymous says:

    I still think it would be more useful to see the actual offensive and defensive GWP and not just the ranks.

    The ranks could be put in parenthesis.

  20. Unknown says:

    I recall a past post from Brian that presented a humorous attempt at quantifying the value of coaching, but don't recall much other discussion here on it. Perhaps this has been considered, but if we posit that a portion of the unexplained variance in the model is a function of "coaching" (primarily coaching strategic decisions--not so much "teaching"), one way of looking at these rankings when comparing to W/L record is to consider which teams are outperforming their production level, and which teams are underperforming. And perhaps, it is a function of the coaching/strategic game decisions. This seems like a plausible explanation for both ATL and CAR. To the naked eye, Mike Smith tends to be more aggressive (granted--I may be guilty of some kind of bias here, but he seems to be more willing to go for it on 4th down relative to most NFL coaches), while Rivera tends to be more risk averse (relative to the probabilities).

    I don't know if this is technologically feasible given the play by play data, but I wonder if a coaching metric could be calculated based upon some kind of relationship between "optimal" decisions versus "actual" decision in the game data. So, for example, if the optimal decision for a down/distance/score/time remaining is to run/pass the football, and the team passes/runs the football instead, perhaps we assign a -WPA or -EPA to the coach based on the difference between the actual decision and the optimal decision (regardless of the actual outcome of the decision). Since coaches could only make "bad" decisions relative to the optimal decision at the given point in time, their scores could only be negative--cause the actual outcomes are irrelevant--other than impacting the optimal decision for the next play. But a lower negative might mean "better" coaching. And it may also give quantifiable and meaningful information (-EPA or -WPA) on how much less than optimal decisions are actually costing teams in terms of WP and/or EP. Presumably, this measurement could be done for every play. Cause every play would create a new "game" and a new optimal decision. And then we could maybe also test to see if this value impacts GWP.

    It seems likely there is a component of the statistics that embeds "coaching", and thus presumably there is a risk of double-counting. But it would be interesting to see if there was any relationship between "proper" coaching decisions and winning and losing.
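    The scoring scheme proposed above could be sketched roughly like this. All the EP numbers are invented for illustration, and "optimal" here just means whatever the decision model recommends:

```python
def decision_penalty(optimal_ep, chosen_ep):
    """Penalty (always <= 0) for deviating from the model-optimal call,
    measured in expected points and ignoring the play's actual outcome."""
    return min(0.0, chosen_ep - optimal_ep)

def coach_score(decisions):
    """Sum of penalties over a game; closer to zero means closer to optimal."""
    return sum(decision_penalty(opt, chosen) for opt, chosen in decisions)

# Invented example: three calls, one of which (say, punting on a short
# 4th down) cost 0.8 EP relative to the model's recommendation.
game = [(0.5, 0.5), (1.2, 0.4), (-0.1, -0.1)]
print(coach_score(game))
```

    As the comment notes, scores can only be zero or negative because outcomes are ignored; only the gap between the chosen call and the optimal one counts.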

  21. James says:

    Arash, Brian has looked into weighting recent games more but found that it was no more predictive, and so decided to weight all games equally.

  22. Michael Beuoy says:

    Arash - Just a clarification: I found that the point spread for a given game was based on the past 5 weeks' point spreads, not actual game outcomes. Vegas point spreads are far less noisy than actual game outcomes, so I have the luxury of only looking at recent data.

    For Brian's rankings, the week to week noise level most likely drowns out the much smaller impact that recency may have. I think Brian has commented on this in the past and indicated that weighting recent games more heavily doesn't improve the prediction.

  23. Anonymous says:

    If you'd prefer a method of evaluating your model that doesn't involve going three decimal places, try multiplying the predicted probabilities of actual outcomes, taking the logarithm, and dividing by log(2) * -1 * number of games. For example, if your model says team A has a 30% chance to win their game, and team B has an 80% chance to win their game, and both win, you'd do:

    log(.3*.8) / (log(2)*-1*2) = 1.03

    The nice thing about this metric is that if you say that every game is a 50/50 toss-up, you always get exactly 1. Lower numbers are better, so if your model scores under 1, it's better than a monkey.
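    That metric is just the average binary log loss, in bits. A minimal sketch of the computation described above:

```python
import math

def log_score(win_probs):
    """Average -log2 of the probability assigned to each actual outcome.

    Algebraically the same as log(product of probs) / (log(2) * -1 * n).
    A forecaster that calls every game 50/50 scores exactly 1.0;
    lower is better.
    """
    return -sum(math.log2(p) for p in win_probs) / len(win_probs)

print(round(log_score([0.3, 0.8]), 2))  # 1.03, as in the example above
print(log_score([0.5] * 14))            # 1.0, the coin-flip monkey
```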

  24. Anonymous says:

    Just ran the above numbers on the model's predictions for this week: 1.07. Not bad! But no Monkey of Achievement.

  25. Arash Sadat says:

    James and Michael - thanks for the info and clarification. I'm getting the sense that the small sample size would make it difficult -- if not impossible -- to quantify the predictive effect of recent performance, but can we make any conclusions at all on this issue? The Vegas lines seem to disproportionately reflect recent results and it would be great to know whether that's based on something quantifiable or whether it's designed to take advantage of the betting public's (presumed) recency bias.

  26. Arash Sadat says:

    And btw, Michael, I'm a huge fan of your work. Any chance we get betting market power rankings again this year?

  27. Anonymous says:

    Are you sure about your findings?
    Can you provide a link? There is evidence contrary to your position, e.g. this week NO is favored over TB (even though TB's stats this year are much better and they're coming off a lopsided win). Vegas in this case, by putting NO at 2.5, is weighting NO's past history (last year and bettor bias) and not their weak start. There are many other examples.

  28. Unknown says:

    As a quasi-academic, I've been interested in the price efficiency of the NFL "price" markets for a couple of years now. But time and other constraints have prevented me from doing any rigorous work on the topic. This site is outstanding and gives me a lot of ideas. Unlike a lot of academics who study the topic of price efficiency of the NFL gambling market, we have a collection of football fans who understand the game and also happen to have some stellar mathematical skills. It's incredibly thought provoking work.

    Anecdotally, I think it's difficult to make any broad generalizations about the makeup of the Vegas lines. As I'm sure everyone knows, Vegas isn't trying to set a line based on their expected outcome, but rather to set a line that will inspire gamblers to roughly "split the baby" in terms of overall action on the game. Inasmuch as gamblers have recency bias, or any other kind of bias, the lines ought to reflect that.

    In the case of New Orleans, the general dominance of the team over the past number of years appears to be overriding any concern about their otherwise inept performance for most of this season. The same is true for the opposite reason in Tampa Bay. And it's where the power of these type of models can potentially spot some opportunities and inefficiencies. If the models recognize significant changes in expected performance before the gambling public, that might be evidence of an inefficiency that can be exploited. Additionally, if the models recognize strengths (or weaknesses) that are not reflected in public perception of the teams, there may also exist inefficiencies to exploit.

    Since the probabilities are based on win or lose (and not ATS), the only tests we could do at the moment as to the superiority of the model over the gambling public would be to test the model's accuracy at calling winners in cases where the team with a greater than 50% chance of winning a given game is an underdog. Assuming the Pro-Football Forecaster has been updated with the most recent data, teams that are underdogs this week but the model says have a > 50% chance to win are St. Louis over GB, Carolina over Dallas, Tampa over New Orleans, and Cincy over Pittsburgh. It is perhaps not surprising at all that 3 of the 4 favorites in those games (GB, Dallas, and Pittsburgh) are among the most heavily bet teams. The model would suggest the market has an inefficiency there, particularly as it relates to this week's games.

    It would indeed be interesting to calculate probabilities ATS in order to be more precise with potential inefficiencies. For example, are the Patriots really 10.5 points better than the Jets? According to Vegas, they are. The model says New England is likely to win, but how about covering a 10.5 spread? But I can tell from reading Brian's posts that this is not his primary interest--and presumably would require a lot of additional work. But it's a potential area to research.

  29. Anonymous says:

    There needs to be a FAQ for some of these questions as they have been answered before but it might be hard to find where they have been answered.

  30. Michael Beuoy says:

    Arash - I actually started my own site earlier this year (see Sports Market Analytics in the "Other Great Sites" sidebar).

    I've been publishing the NFL betting market rankings on a daily basis here: NFL Rankings

  31. Pat Laffaye says:

    These rankings don't make much sense and it's because of something simple: WINS aren't being considered. The point of the game is to WIN, not to be the best at O & D. We are NOT early in the season anymore, so something is amiss. ATL is ranked too low: perfect record, with 3 wins in each conference and no byes so far. The 4 teams above them have a combined record of 9-13, ugh! CAR is definitely crap; with only one win, they have not played any AFC teams, and all their losses, save NYG, are against LOWER RATED teams. Admit it, a .200 team should not be ranked #6, and furthermore a .500 team should not be best in the NFL!

  32. Anonymous says:

    It seems like maybe there needs to additionally be a weighting component for how often a team does a certain thing. The Falcons have passed twice as much as they've run the ball, so I don't think the formula is doing them justice, whereas the Panthers only have 10 more passes than runs.

    There is just no way I believe that ATL has the 22nd best offense in the league.

    Even by this site's own expected points per play, ATL is near first. Also, ATL is first in pass SR% and CAR is near the bottom, so CAR's YPA seems like it's been too boom-and-bust.

  33. Anonymous says:


    1. The biggest fallacy regarding bookmakers is that they are trying to 'balance the action'. A recent paper highlights this effectively (I will post the link).
    Their goal is to maximize their profits because they can pick the better team more accurately than most bettors.
    They shade lines accordingly and very effectively. They do it as far as they can and leave themselves open slightly to a few sharp bettors who can profit.
    This is a key difference between sports markets and financial markets.

    The paper I found is Levitt:

    "...If bookmakers are not only better at predicting game outcomes, but also proficient at predicting bettors' preferences, they can do even better in expectation than to simply collect the commission."

  34. Anonymous says:


    Perhaps you would care to study 'objective' studies further to understand advanced sports forecasting.

    Pro-Football-Reference has an excellent study on the effect of 'randomness' on standings results. Google "10,000 seasons" and PFR to find it.

    Also, Brian has posted an excellent study on how close to 50% of outcome (in effect wins) are 'luck' based.

    The game simply is not designed to exclusively produce or reward the better team. All teams that win must be good & lucky! However, most sports fans rush to attribute luck to the underdog but not notice it when the 'better' team gets it.

  35. Pat Laffaye says:

    I get what you are saying and I've read the articles on luck - I think it was 42%, not quite a coin flip. Bottom line, this model does not factor in 1) WINS, 2) POINTS, or 3) LUCK.

    I'm not knocking it because it is the programmer's choice to determine what's important and goes into the algo, but I'm just highlighting a perceived RANKING weakness that IS DIFFERENT FROM MOST MODELS that use 1 and/or 2. My take is you need strong fundamentals which is primarily a solid ranking order and I don't see it here.

    I understand what's needed to write a good predictive algo and have produced a couple myself, which have been tracked for some time now at various 3rd-party sites as RWP and XWP. I post my NFL & CFB rankings for both systems at

  36. Anonymous says:

    Quasi-academic: in order for a mechanical power rating system to be spotting profitable "inefficiencies", it has to predict games better than the market. According to Burke's research, his predictions have roughly matched those of the betting market in terms of avg. W/L accuracy. This is not easy to do and the ratings are certainly above average, given the inputs, but it does not mean that the ratings can beat the market.

  37. Michael Charles says:

    I'm glad I found these team efficiency rankings. I really need to study this for my future betting plans.

  38. Jared Doom says:

    FYI, I'm in a pick 'em league with a bunch of analytic superstars (actuaries), and have used Brian's game winning probabilities (with some judgment adjustments, which have helped very slightly relative to the model alone) for 2 years and 6 weeks. I have consistently been in the bottom quartile of the league. I know the GWP model is better than a monkey, and I think Brian determined it was at least as good as Vegas at one point.

    So apparently I am working with NFL forecasting geniuses? Anyone else in this situation?

  40. James says:

    Pat said: "I'm not knocking it because it is the programmer's choice to determine what's important and goes into the algo, ... My take is you need strong fundamentals which is primarily a solid ranking order and I don't see it here."

    The whole point of Brian's system was to remove human bias and instead only model based on fundamentals and their importance in winning. There are plenty of articles on this site describing the model's basics and accuracy, and others detailing how wins and points scored are NOT good predictors of future success. If anything, including past wins and points scored includes MORE luck, which is not a repeatable team skill, and thus hurts the overall accuracy.

    In short, wins and points scored are NOT fundamentals. NYPA and oppNYPA are.

  41. Pat Laffaye says:

    James, you clearly don't understand what I meant. My point is a solid ranking system is fundamental to a highly predictive system. I'm not suggesting to follow consensus, but team order needs to make some sense!

    Explain to me why a 1-4 team is #6 when they can't win games in their own conference and three of their losses are against lower rated teams!! Could it be SKILL? Don't tell me it's LUCK, because the model doesn't use it. But we assume LUCK could be derived from wins and/or points.

    Brian, I decided to check out how predictive the model was for weeks 5 and 6. Based on the NYT website, which considers HFA unlike ANS, your model went 15-13. The closing line went 13-15 over the last 2 weeks and is at 56% YTD. I'd be curious to see how you did last year, since that was the first complete year since the model change.

  42. Mike says:

    Jared - I think you're a victim of bad timing. 2010 was not a good year for the model. I think 2011 fared a bit better.

    2009 was a great year though (I won the season prize and three out of 17 weeks in my pick'em pool using Brian's probabilities).

  43. Unknown says:

    Anon 2,

    Agreed. I understand the "balance the books" idea is a bit of an urban legend, and that sports books are likely to be shaded to one side. But the principle remains: they have to set a line that is reasonably close to betting expectations. If this were not the case, we would not observe situations where a line moves 3 or 4 points in the course of a week, indicating the books are getting "out of balance" by a larger degree than they are comfortable with on a risk/reward basis. While that kind of movement is not a common phenomenon, it does happen (and not just because of injuries). So while they may be shaded to one side or another based on their expectations, I still think the dominant weighting is biased towards "market" expectations. But I'm more than happy to retract that statement if there is an "insider" with more intimate knowledge of Vegas practice.

    Anon 4 - "in order for a mechanical power rating system to be spotting profitable "inefficiencies", it has to predict games better than the market. According to Burke's research, his predictions have roughly matched those of the betting market in terms of avg. W/L accuracy."

    Yes and no. Yes, the prediction model has to predict games better than the market, but it doesn't have to be EVERY game. My understanding is the W/L accuracy comparison was based on Brian's model versus Vegas favorites. But if a model is particularly good at identifying certain types of matchups that the general market misprices, there can still be inefficiency. This, imho, is one of the flaws in some of the academic research on the topic (though I concede I have not done an exhaustive review). Tests on "home dog" strategies and other "one size fits all" approaches have shown the market to generally be efficient, particularly when the vig is considered. But these tests are too general, imho. I think looking for specific "value plays" (if you will) is where the opportunity may exist.

    A crude example might be a situation where the implied W/L probabilities based on the lines in Vegas deviate from the model expectations by > 20% (or whatever). If the model is indeed robust (as a post from Brian in the past seems to indicate), there may be an advantage there. Because in my brief review of this season's outcomes, there have been a number of situations where there has been substantial deviation from the implied probabilities in Vegas and the probabilities from Brian's model. And not only in cases where Brian's model said the W/L probability was > 50%, and Vegas had that team in as a dog. For example, Brian's model in the Packers/Colts game, as memory serves, had a 43% probability of a Colts win. Based on the money line in Vegas, the implied probability was around 20%--a fairly significant deviation. Similar deviation can be found in a handful of games just about every week. So while Brian's model overall, may be about as accurate as Vegas favorites, if his model remains robust regardless of the deviation from Vegas probabilities, that would be an inefficiency that could be exploited--simply by betting all games where deviation from the model was significant enough to justify an "investment".

    What's that deviation? I have no idea--and not enough data to make any conclusions. But it's a thought. And it's still an inefficiency even if the overall performance of the model matches Vegas favorites.
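    The deviation described above is easy to compute from a money line. A minimal sketch (the +350 line for the underdog is a made-up illustration, not an actual closing number):

```python
def implied_prob(moneyline):
    """Win probability implied by an American money line.

    The vig is still baked in, so the two sides of a game will sum
    to slightly more than 1.
    """
    if moneyline < 0:
        return -moneyline / (-moneyline + 100.0)
    return 100.0 / (moneyline + 100.0)

# Hypothetical: the model gives an underdog a 43% chance, but the book
# lists them at +350, implying only about 22% -- a 21-point deviation.
model_p = 0.43
market_p = implied_prob(350)
print(round(market_p, 2), round(model_p - market_p, 2))
```

    A proper backtest would also remove the vig before comparing to the model, but even the raw gap is enough to flag candidate games.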


  44. James says:

    Pat, I don't know what you mean when you say: "Explain to me why a 1-4 team is #6 when they can't win games in their own conference and three of their losses are against lower rated teams!! Could it be SKILL? Don't tell me it's LUCK, because the model doesn't use it. But we assume LUCK could be derived from wins and/or points."

    Why can't the reason be bad luck? They've lost all three close games they've played, by a total of 12 points. They were one extraordinarily ill-timed fumble from beating the Falcons.

    If you want to know why they are ranked #6 it's mostly because they have the third most efficient passing game in the NFL, the single most important predictor of future success, and the 5th best penalty rate. The data and the model are all explained for you on this site, so I don't know what you don't understand.

  45. Jared Doom says:

    "Mike says:
    Jared - I think you're a victim of bad timing. 2010 was not a good year for the model. I think 2011 fared a bit better.

    2009 was a great year though (I won the season prize and three out of 17 weeks in my pick'em pool using Brian's probabilities)."

    I actually believe the pickers I play against are just that good. It's consistently been the same story over 2010, 2011, and the first few weeks now of the 2012 season. There are about 40 people in the pool, and the best I've finished in a week is 2nd.

  46. Anonymous says:


    I think the advanced team stats are more telling than Carolina's raw yards per attempt in this case, and thus Carolina is being overweighted by the formula.

    By ESPN's advanced QBR rating, Cam Newton is 26th.
    By DVOA, Cam Newton is below average at -7.5%.
    By Pro Football Focus's video scouting method, Newton is 25th.
    By this site's own advanced stats, CAR is close to average in pass EPA/play and near last in pass success rate.

    By the eye test, more of their big plays have come when trailing, and their passing has been very inconsistent.

    I also think the formula needs to have some sort of weighting for the number of each type of play. Say I had a theoretical team that passed 1 time for 50 yards and ran 100 times with an average success rate. They would appear godly by this formula because of their 50 YPA.

    Also, I think the opposite problem is dragging ATL's offensive ranking down. They pass way more than they run, and their passing is better than their raw Net YPA would suggest.
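    One standard way to add the volume weighting suggested above is to regress a small-sample rate toward the league mean before ranking it. A minimal sketch; the league average and prior weight are illustrative assumptions, not the site's actual settings:

```python
def shrunk_ypa(yards, attempts, league_ypa=6.2, prior_attempts=50):
    """Shrink a raw yards-per-attempt figure toward the league average.

    Acts as if every team starts with `prior_attempts` league-average
    attempts, so tiny samples can't dominate the ranking.
    """
    return (yards + league_ypa * prior_attempts) / (attempts + prior_attempts)

# The 1-attempt, 50-yard team from the comment: raw YPA is 50.0,
# but the shrunk estimate stays near the league mean.
print(round(shrunk_ypa(50, 1), 2))
```

    With a full season's worth of attempts, the prior washes out and the estimate converges to the raw rate, so this only tames the extreme low-volume cases.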

  47. Anonymous says:

    Continued from above: I guess it was strange to see Brian write that ATL "has an average passing attack" when they are second in expected points per play and 1st in pass success rate.

  48. Ropke says:

    I'm sure this is answered SOMEWHERE on this site, but I can't find it. I see in the Efficiency Rankings, GB was ahead of STL. But at the Fifth Down page STL was the favorite. Was this entirely because of homefield advantage?

  49. Anonymous says:

    Ropke, yes.

  50. Anonymous says:

    Yeah I'm starting to get a little suspicious of these efficiency ratings. Maybe I'm just not fully understanding them. I know the WPA formula is based on actual data, right? So in situation X, historically (since 2000), teams have won Y percent of the time. It's not a guess, it's actual historical data. So I trust WPA.

    My concerns stem from how efficiency ratings are determined. It just seems like too much weight is given to yards per attempt without any weight given to how much those yards actually help. It doesn't matter if you throw an 80 yard TD pass if there's one minute left and you're down by 3 TDs.

    Looking at the stats on this site, Carolina is ranked:
    19th in offensive EPA
    19th in offensive WPA
    23rd in offensive success rate
    19th in passing EPA
    22nd in passing WPA
    23rd in passing success rate

    Interestingly enough, they're 14th in run EPA, WPA, and success rate, even though ANS has always claimed that running is the least important component of an offense.

    So I'm struggling to see how AYPA is such an important component. If a team is playing from way, way behind and the winning team's defense is giving them moderate gains, of course they'll improve their AYPA. But it's not actually helping them win.

  51. evo34 says:


    I agree with what you say, but the problem is that what you say is all theoretical. Yes, his model *could* have an edge vs. Vegas that only shows up in certain subsets of games. But until someone actually demonstrates that it does (no one has), it should be assumed not to have an edge.

Leave a Reply