The “Andrew Luck Stanford Football Concern” is tiresome because no one has attempted to move beyond vague generalities and actually quantify the impact of losing Andrew Luck. So here’s an attempt to quantify it specifically: calculating how the offenses of teams that lost a top-ten quarterback fared the following season, dating back to the 2001 NFL Draft.
I started with quarterbacks who played in the 2000 college football season, excluded non-BCS-conference QBs from the sample, and calculated the numbers with a pretty straightforward method: the team’s average points per game in the top-ten QB’s final season, compared against its average points per game the year after. (Full data here.)
There are 14 different teams in the sample (counting different iterations of the same program as separate teams). Many people reason that since whoever replaces Andrew Luck must be worse than he is, the team must also regress offensively. If history is any guide, this isn’t clear. Four of the 14 teams in the sample actually increased their points-per-game average. The median outcome in the sample is a 17.5 percent decline; applied to Stanford, this would result in 35.64 points per game. If the defense’s points conceded are held equal, the resulting margin of roughly fourteen points per game would be consistent with a top-20 finish (conservatively).
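To make the arithmetic concrete, here is a minimal sketch of the calculation in Python. The team-by-team data lives in the linked spreadsheet and isn’t reproduced here; the 43.2 points-per-game baseline for Stanford is not stated above but is implied by the quoted figures (35.64 ÷ 0.825 = 43.2).

```python
def pct_change(ppg_with_qb: float, ppg_after: float) -> float:
    """Percent change in scoring from the top-ten QB's final year to the next."""
    return (ppg_after - ppg_with_qb) / ppg_with_qb * 100

def project(baseline_ppg: float, decline_pct: float) -> float:
    """Apply a sample's median percent decline to a baseline scoring average."""
    return baseline_ppg * (1 - decline_pct / 100)

# Baseline implied by the quoted numbers: 35.64 / (1 - 0.175) = 43.2 ppg.
STANFORD_BASELINE = 43.2

print(project(STANFORD_BASELINE, 17.5))  # ≈ 35.64, the median-case projection
```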
The worst-case scenario, historically speaking, is Ole Miss, which lost 47.1 percent of its scoring the season after Eli Manning’s departure. I don’t think this is a particularly instructive example, for a few reasons. Manning was one of the high points of what was otherwise a relatively mediocre team. Yes, Ole Miss went 10-3 in his final year with a top-fifteen finish, but squeaking by Vanderbilt and losing to Memphis is hardly the mark of a good team. Manning’s Ole Miss is what people imagine when they imagine post-Luck Stanford, but Stanford has much more talent and starts from a better base.
For many reasons, no precedent seems like a satisfying comparison. The upstart programs on the list didn’t have as much returning talent as Stanford does, to say nothing of the quality of its recruiting; and the gap between Stanford and the football oligarchy that makes up the rest of the list is still large. Stanford sits in a middle stage between the two, which makes it a strange team to project.
The perceptive critic will note that Stanford lost more talent than most of the schools in the sample and will insist on a smaller subset of teams that also lost a ton of talent. Much obliged: let’s reduce the list to teams that lost multiple first-round picks on offense, one of them a top-ten QB. This doesn’t help the critic’s case: two of the five cases gained points per game; the worst-case scenario was USC after its epic 2006 draft class (the one with Leinart, Bush, et al.); and the median is an 8.25 percent decline. Scoring 39.6 points per game sounds quite good, doesn’t it?
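Using the `project` helper from the sketch above, the subset’s median reproduces the quoted figure:

```python
print(project(STANFORD_BASELINE, 8.25))  # ≈ 39.64 ppg
```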
To be fair, a couple of examples in this smaller dataset had significant injuries in the season before the draft exodus. Exclude them and you get a robust three-team sample…in which the median decline is 12.7 percent, good for 37.7 points per game when applied to Stanford.
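And again for the injury-adjusted three-team sample:

```python
print(project(STANFORD_BASELINE, 12.7))  # ≈ 37.71 ppg
```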
The three-team sample, while pitifully small, underlines the volatility in making any projection. LSU picked up a nearly 15 percent increase in points production from 2006 to 2007 and won the national championship behind Matt Flynn, who had a highly mediocre senior season (below 7 yards per attempt). All this despite losing three first-round offensive players.
The real downside comes if you think the team emulates the transition from USC 2005 to USC 2006. In that case, Stanford loses 38 percent of its point production and sags to 27 points per game or so. If the team concedes around 20 points per game, a 7-5 or 8-4 season seems likely.
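For what it’s worth, the same sketch reproduces this worst-case margin, taking the 20 points conceded per game as a given:

```python
usc_case = project(STANFORD_BASELINE, 38.0)  # ≈ 26.78 ppg
print(usc_case - 20.0)                       # ≈ +6.8 scoring margin per game
```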
But, historically speaking, this appears to be an extreme scenario. At least as far as the NFL draft is concerned, Stanford lost much less talent than that USC team did. And it’s also clear that a substantial upside scenario is conceivable—one that ends with a third straight BCS bowl appearance.