Posted by Neil Paine on March 31, 2009
This past week, ESPN.com's John Hollinger rolled out a new stat that compares a player's PER (Player Efficiency Rating) to that of a "replacement-level" player, in an effort to incorporate minutes played into an evaluation of the player's worth. Said JH:
"The idea behind Value Added is to take the difference between a given player's performance and that of a 'replacement level' talent -- the type of guy who might be sitting at the end of a team's bench, or perhaps in Sioux Falls -- and multiply that difference by the number of minutes that player played. The result shows, theoretically, how many points the player added to his team's bottom line on the season.
VA is very useful for award voting in particular, because it allows us to compare players with disparate production and minutes -- say, one player who was brilliant in 60 games and another who was merely good but played all 82 -- and figure out which performance was more productive."
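The computation Hollinger describes is simple enough to sketch in a few lines of Python. Note that the replacement-level PER of 11.0 and any final scaling to points are assumptions for illustration, not Hollinger's published constants:

```python
def value_added(per, minutes, replacement_per=11.0):
    """Production over a replacement-level talent, scaled by minutes.

    Follows the idea in the quote: (player's PER - replacement PER) * minutes.
    The replacement_per default of 11.0 is an illustrative assumption, and
    Hollinger's actual VA applies a further scaling to express the result
    in points, which is omitted here.
    """
    return (per - replacement_per) * minutes

# The award-voting comparison from the quote: one player brilliant in
# 60 games vs. another merely good across all 82 (minutes are made up).
star = value_added(per=28.0, minutes=60 * 36)       # 60 games at 36 mpg
workhorse = value_added(per=19.0, minutes=82 * 38)  # 82 games at 38 mpg
```

Under these hypothetical numbers the brilliant-but-brief season grades out ahead, which is exactly the kind of apples-to-apples comparison VA is meant to enable.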
This is just the latest in a long line of "value over replacement"-type metrics, which had their genesis in baseball -- in the mid-90s, sabermetrician Keith Woolner invented VORP as a method of "correctly valuing durability and playing time versus rates of production". Since that time, it's basically become the de rigueur way to bridge the gap between counting stats and rate stats, since it essentially reflects the economic reality of pro sports (the talent pool is quite large and there is a certain level of production for which you can pay the bare minimum).
Baseball's replacement level is relatively easy to find, given that the definition is "the expected level of performance an MLB team will receive from one or more of the best available players who can be obtained with minimal expenditure of team resources to substitute for a suddenly unavailable starting player at the same position". Because baseball has very clearly-defined positions, you just look at the aggregate performances of the backups at those positions, and there's your replacement level.
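In code, baseball's approach amounts to pooling the non-starters at each position and taking their aggregate rate of production. A minimal sketch, with entirely made-up players and numbers standing in for real roster data:

```python
from collections import defaultdict

# Hypothetical (name, position, is_starter, production, playing_time) rows.
roster = [
    ("A", "SS", True,  30.0, 600),
    ("B", "SS", False,  8.0, 200),
    ("C", "SS", False,  5.0, 150),
    ("D", "2B", True,  25.0, 580),
    ("E", "2B", False,  6.0, 180),
]

def replacement_level_by_position(rows):
    """Aggregate per-unit-time production of the backups at each position."""
    totals = defaultdict(lambda: [0.0, 0.0])  # position -> [production, time]
    for name, pos, is_starter, prod, time in rows:
        if not is_starter:
            totals[pos][0] += prod
            totals[pos][1] += time
    return {pos: prod / time for pos, (prod, time) in totals.items()}
```

The key assumption baked into this sketch, and the one the rest of the post pokes at, is that `is_starter` is a clean, well-defined flag, which it is in baseball but not in basketball.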
But, as we often lament in the statistical community, basketball ain't baseball. There aren't clearly-defined positions, for one thing -- just about everyone can play at least 2 positions in a pinch, and some can play 3 or even 4. Also -- and perhaps consequently -- the delineation between "starter" and "backup" isn't so clear (for instance, an All-Star-caliber player like Manu Ginobili isn't technically a starter). So you can't simply look to all "non-starters" when determining the replacement level, at least not the way you can in baseball.
In addition, starters are "replaced" with a different caliber player than bench guys. If Dwyane Wade is injured, his replacement is somebody like Daequan Cook, who is much better than your typical 10-day contract/NBDL call-up; likewise, if Cook is hurt, somebody like Luther Head or Yakhouba Diawara can slide in. It's only when you get down to the Dorell Wrights of the world that you truly start to fill minutes with NBDL-caliber talent, and that's about 3 degrees of depth-chart separation from Wade himself. So it doesn't really reflect the reality of basketball to simply measure the difference between Wade's production and the production of an NBDL player in the same minutes, because he's not actually being directly replaced by the NBDL guy.
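One way to model this cascade of replacements -- the "chaining" idea -- is to credit each player only with his production over the man directly behind him on the depth chart, with NBDL-level talent filling in only at the very bottom. A minimal sketch, where the per-minute production rates and the NBDL level are all assumed numbers, not real data:

```python
# A hypothetical depth chart at one position, best to worst, with an
# assumed per-minute production rate for each player.
depth_chart = [("Wade", 0.60), ("Cook", 0.40), ("Head", 0.30), ("Wright", 0.20)]
NBDL_LEVEL = 0.15  # assumed production rate of a 10-day contract call-up

def chained_value(depth_chart, minutes, nbdl_level=NBDL_LEVEL):
    """Each player's value over the next man on the depth chart.

    Under chaining, an injured player's minutes go to the player behind
    him, and only the last slot is backfilled from the NBDL -- so nobody
    but the last man is compared directly against NBDL-level production.
    """
    values = {}
    for i, (name, rate) in enumerate(depth_chart):
        backup_rate = depth_chart[i + 1][1] if i + 1 < len(depth_chart) else nbdl_level
        values[name] = (rate - backup_rate) * minutes
    return values
```

Under this scheme Wade's margin is measured against Cook, not against the NBDL guy three rungs down, which is the whole point of the objection above.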
I've suggested in the past that what's sometimes called "chaining" could be the solution for this problem, but I'm not even entirely sure that method accurately reflects the reality of player replacement in the NBA. So I guess my question is, what do you think? Is the concept of "value over replacement" as valid in basketball as it is in baseball, or is the different nature of the two sports (roster-wise) a major stumbling block toward developing "VORP" for NBA players?