A frequent question around here is 'What's your prediction model's accuracy?' The answer is that it varies from year to year, depending on how the schedule works out, with some random chance thrown in. Some years just happen to have more evenly matched games, and others have more mismatches.
What's more important than raw accuracy, at least for judging the model, is what's called calibration. For example, when the model says a game is a 60/40 game, I actually want the model to be "wrong" 40% of the time. I care more about an accurate estimate of the game odds than about simply picking the eventual winner.
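For readers who want to see what that means concretely, here's a minimal sketch of how a calibration table can be computed. This isn't the model's actual code; the arrays `probs` and `home_won` and the choice of 10 bins are just illustrative assumptions.

```python
import numpy as np

def calibration_table(probs, home_won, n_bins=10):
    """Bin games by predicted home-win probability and compare each
    bin's average forecast to the fraction of games the home team won.
    probs and home_won are illustrative inputs, not the model's data."""
    probs = np.asarray(probs, dtype=float)
    home_won = np.asarray(home_won, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # Assign each game to a bin; clip keeps a 1.0 forecast in the top bin.
    idx = np.clip(np.digitize(probs, edges) - 1, 0, n_bins - 1)
    rows = []
    for b in range(n_bins):
        mask = idx == b
        if mask.sum() == 0:
            continue
        rows.append((edges[b], edges[b + 1],     # bin edges
                     int(mask.sum()),            # games in bin
                     probs[mask].mean(),         # mean predicted probability
                     home_won[mask].mean()))     # observed home-win rate
    return rows
```

Perfect calibration means each bin sits on the diagonal: the games the model calls 60/40 should be won by the favorite about 60% of the time.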
Fortunately, reader 'K Rich' has tracked the model's performance since 2007 and sent me a thorough spreadsheet. The chart below illustrates the model's calibration results.
For the most part, the calibration error appears to be due to sampling error--there simply aren't enough games in each bin to be definitive. The calibration line zigzags from one side of the optimum line to the other, as we'd expect. On the other hand, there do appear to be some trends: the home team is over-favored in mismatches where it is the stronger team and under-favored in mismatches where it is the weaker team. It's possible that home field advantage is even stronger in mismatches than the model estimates.
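To put a rough number on how much zigzag sampling error alone would allow, here's a quick back-of-the-envelope check using the normal approximation to the binomial. The bin size of 50 games is a made-up figure for illustration, not a count from the spreadsheet.

```python
import math

def bin_interval(p, n, z=1.96):
    """Approximate 95% interval for the observed win rate in a bin of
    n games whose true win probability is p (normal approximation)."""
    se = math.sqrt(p * (1.0 - p) / n)
    return max(0.0, p - z * se), min(1.0, p + z * se)

# Example: a 60/40 bin with 50 games. By chance alone the observed
# rate could land anywhere from roughly 0.46 to 0.74, so modest
# departures from the diagonal aren't definitive.
print(bin_interval(0.60, 50))  # -> roughly (0.46, 0.74)
```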