I noticed that as well when I ran some SPM numbers myself. Perhaps Rosenbaum's system was over-parametrized, since he had relatively few data points. Did he run the regression over all 420 players with more than 250 minutes? He reports results for 420 players. His notes state that "The regression is weighted by minutes played with the 2003-04 season counting twice as much as the 2002-03 season." That weighting adds even more error to the regression.
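For concreteness, a minutes-weighted regression of the kind Rosenbaum describes can be sketched as weighted least squares, with each player-row's weight equal to minutes played times a season factor (2 for 2003-04, 1 for 2002-03). The design matrix, coefficients, and data below are toy placeholders, not his actual inputs:

```python
# Sketch of a minutes-weighted regression as described above. The box-score
# columns, coefficients, and data are all made up for illustration; only the
# weighting scheme (minutes x season factor) follows the quoted notes.
import numpy as np

rng = np.random.default_rng(0)
n = 420  # players in the regression

X = rng.normal(size=(n, 5))                      # toy box-score rates
true_beta = np.array([2.0, -1.0, 0.5, 1.5, -0.5])
y = X @ true_beta + rng.normal(scale=3.0, size=n)  # noisy plus/minus target

minutes = rng.integers(250, 3000, size=n).astype(float)
season = rng.choice([2003, 2004], size=n)
season_factor = np.where(season == 2004, 2.0, 1.0)  # 2003-04 counts twice
w = minutes * season_factor

# Weighted least squares via the normal equations: beta = (X'WX)^-1 X'Wy
XtW = X.T * w
beta_hat = np.linalg.solve(XtW @ X, XtW @ y)
print(beta_hat)
```

With 420 rows the toy coefficients are recovered closely; the point is that the weights change which players' noise dominates the fit, not the machinery itself.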

How exactly did you run your regression? Did you use one-year APMs? Those are really noisy, but probably reasonable for this purpose, since the errors should average out. With what, six years of data, the outliers shouldn't over-influence the regression.
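The "errors should average out" intuition can be checked with a toy simulation: if a one-year APM is the player's true value plus independent noise, averaging k seasons shrinks the noise by roughly the square root of k. The numbers below (true value, noise size) are hypothetical:

```python
# Toy illustration, not real APM data: averaging six noisy one-year APMs
# shrinks the random error by about sqrt(6).
import numpy as np

rng = np.random.default_rng(1)
true_apm = 3.0    # hypothetical player's true impact per 100 possessions
noise_sd = 4.0    # one-year APM standard errors are often this large
seasons = 6

# 10,000 simulated players, each with six independent one-year estimates.
one_year = true_apm + rng.normal(scale=noise_sd, size=(10_000, seasons))
single_season_err = one_year[:, 0].std()        # about noise_sd
six_year_avg_err = one_year.mean(axis=1).std()  # about noise_sd / sqrt(6)
print(single_season_err, six_year_avg_err)
```

So a six-year average cuts a 4-point standard error down to roughly 1.6, which is why noisy one-year inputs can still support a stable regression.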

That's a good point, but unfortunately we don't have play-by-play data, and therefore can't say who was on the floor in the highest-"leverage" moments. That said, since these are playoff games, I think it's fairly safe to assume most teams played to maximize their point differential and didn't pull their starters.

*When I did the Statistical Plus/Minus for the NCAA last year, I summed to Ken Pomeroy's efficiency differential, which should be about the same as a SRS for college (right?)*

Yes. In fact, SRS doesn't explicitly account for pace the way Kenpom's adjusted efficiency differential does, so I'd say his efficiency differential is the "more correct" metric to sum to (since SPM was regressed on efficiency differential, not per-game point differential).
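One simple way to make player SPMs "sum to" a team-level efficiency target is an additive team adjustment: spread the gap between the target and the minutes-weighted SPM sum evenly across the five floor slots. All numbers below are hypothetical, and this is only one of several reconciliation schemes:

```python
# Sketch of reconciling player SPMs with a team target (a Kenpom-style
# adjusted efficiency differential, points per 100 possessions). The raw
# SPMs and minutes shares are invented for illustration.
raw_spm = [4.1, 2.3, 0.8, -1.0, -2.2]           # hypothetical player SPMs
minutes_share = [0.22, 0.21, 0.20, 0.19, 0.18]  # shares of team minutes

team_eff_diff = 6.5  # hypothetical adjusted efficiency differential

# Minutes-weighted SPM sum, times 5 because five players share the floor.
weighted_sum = 5 * sum(s * m for s, m in zip(raw_spm, minutes_share))
c = (team_eff_diff - weighted_sum) / 5          # per-slot adjustment
adjusted_spm = [s + c for s in raw_spm]

# The adjusted SPMs now reproduce the team efficiency differential exactly
# (the shares above sum to 1, so the algebra closes).
check = 5 * sum(s * m for s, m in zip(adjusted_spm, minutes_share))
print(check)
```

Summing to per-game point differential instead would just swap the target constant, but as noted above that mixes pace back into a pace-free metric.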

Could some of these results be biased because the good teams were not playing their stars the whole time (because they were winning comfortably), while the losing teams were leaving their starters in?

