User:Morgan Wick/College Football

From Wikipedia, the free encyclopedia

As you may well be aware, no sport incites as much controversy over its champion as college football, which has by far the most controversial championship system not just in America but in the whole world. That's because there IS no championship, only a makeshift compromise system that, like all compromises, leaves everyone mad.

It's what really makes college football so exciting these days - we could get a classic championship matchup like the 2006 Rose Bowl Game, or, far more likely, a huge controversy that causes heads to explode. Best of all, we don't get the same controversy twice.

Partly because of this, there is no sport in which sports rating systems are more popular than college football. It attracts nerds convinced that their mathematics degrees leave them uniquely qualified to rejigger a formula that pops a "best team" out of the air, despite their having no actual football experience.

And guess what? I'm no different! Only my formula is really the best! No, really!

Actually, it's ridiculously complicated (which, of course, means it's really really good), but I think you'll like it. It works in three parts:

  • A Rating: This page describes a way of determining the size of the outcome of a game other than margin of victory: the "score ratio". This is defined as the margin of victory divided by the winning score. Note that under this system, the score ratio for a shutout is always 1, no matter how many points the winning team scores. Thus, this system minimizes the effect of running up the score to impress pollsters, or computers where MoV is used without mitigating factors. In fact, it emphasizes defense a LOT - if you're beating a team 49-7 and the losing team picks up another touchdown, you have to get to 98 points just to get back to the score ratio you had before!

    Since score ratio runs from 1 for shutout wins to -1 for shutout losses, I rescale the numbers to run from 0 to 1, with .5 representing a tie. The score ratio for the A Rating is taken by summing the total margin of victory, with losses counted as negative, and the total number of points scored in those games, calculating the score ratio on those totals, and rescaling as above. Finally, multiply that number by the winning percentage to get the A Rating. This number is useful in its own right, but in a limited way, as it does not take into account strength of schedule. That's the job of...

  • B Rating: I think a lot of systems that factor in MoV (which includes no BCS systems) don't consider that it's not just that you beat people big, it's who you beat big. A humongous win over another big team, say Georgia, should be given more weight than the same humongous win over, say, Temple. Thus, this rating resorts to straight-up MoV, because the effect of running up the score is, in theory, mitigated this time by the fact that RUTS'ing bad opponents will not count for as much. (If you can RUTS a good opponent, that says something in itself.) I start this rating by calculating B Points: the margin of victory divided by the opponent's A Rating. If it's a loss, I make the margin negative, but I also subtract the A Rating from 1, so a .250 A Rating is divided as a .750. To factor in home field advantage, I add 1 for any road game and subtract 1 for a home game; neutral sites remain unmodified. I add all the B Points together and multiply by the A Rating.

    Note: I do not completely trust the numbers here, because I calculate them in Microsoft Access, which can be frustrating at times. While B Points for individual games are quite low, rarely more than a single digit to the left of the decimal point, when summed they become implausibly high, reaching into the thousands. I don't know what's making Access do this; I can only hope the proportions are the same. Also, for negative B Point totals I have to subtract the A Rating from 1 again for the B Rating calculation. This preserves the integrity of the rankings, but it also means they are cleaner when they're positive; thus, I usually pull out positive B Ratings for special recognition in my rankings.

  • C Rating: Not really needed, but three is a magic number, right? This forms the basis of what I do. I calculate conference ratings by averaging the B Ratings of all teams in the conference. Independents are treated as their own one-team conferences, except Army and Navy, which are counted as one conference. After a lot of rejiggering, I finally arrived at a system I like for dragging teams toward the conference level, mitigating the effect of beating in-conference teams whose ratings are bloated because they're mediocre teams that beat still worse teams. I calculate the difference between a team's B Rating and its conference's rating, and take a fraction of that difference equal to the proportion of Division I-A the conference makes up. For example, a 10-team conference would have 10/119 (119 being the number of teams in Division I-A at this writing) of each of its teams' differences plucked out. That fraction of the difference is then subtracted from the B Rating to arrive at the end result.
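The three ratings above can be sketched in code - a minimal sketch in Python rather than Access, where the function names and data layouts are my own, only the formulas come from the description, and the ambiguous spots (which points count toward the A Rating total, how negative B Point totals are scaled) are flagged in comments as my guesses:

```python
def rescale(score_ratio):
    """Map a raw score ratio from [-1, 1] onto [0, 1], so 0.5 is a tie."""
    return (score_ratio + 1) / 2

def score_ratio(winner_pts, loser_pts):
    """Per-game score ratio: margin of victory divided by the winning score."""
    return (winner_pts - loser_pts) / winner_pts

def a_rating(games):
    """games: list of (points_for, points_against) pairs for one team."""
    total_margin = sum(pf - pa for pf, pa in games)  # losses count negative
    total_points = sum(pf + pa for pf, pa in games)  # assuming both teams' points
    wins = sum(1 for pf, pa in games if pf > pa)
    ties = sum(1 for pf, pa in games if pf == pa)
    win_pct = (wins + 0.5 * ties) / len(games)
    return rescale(total_margin / total_points) * win_pct

def b_points(margin, opp_a, site):
    """B Points for one game. margin is negative for a loss; opp_a is the
    opponent's A Rating (assumed strictly between 0 and 1); site is 'home',
    'road', or 'neutral'."""
    if margin >= 0:
        pts = margin / opp_a        # big wins over good teams score high
    else:
        pts = margin / (1 - opp_a)  # losses divide by 1 minus opponent's A
    return pts + {'road': 1, 'home': -1, 'neutral': 0}[site]

def b_rating(b_points_total, own_a):
    # Guessing that a negative B Point total is scaled by (1 - A) instead of A.
    return b_points_total * (own_a if b_points_total >= 0 else 1 - own_a)

def c_rating(team_b, conference_b_ratings, division_size=119):
    """Drag a team's B Rating toward its conference average by the fraction
    of Division I-A the conference makes up."""
    conf_avg = sum(conference_b_ratings) / len(conference_b_ratings)
    fraction = len(conference_b_ratings) / division_size
    return team_b - fraction * (team_b - conf_avg)
```

For instance, a 49-7 win has score ratio 42/49 = 6/7, and at 98-14 it is 84/98, still 6/7 - which is exactly the "you have to get to 98 points" claim from the A Rating description.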

Are you tired? Don't be. A world of fun can be had by looking at the current results by clicking here!