5.8 / 5.0 = 1.16

0.6129486 / 0.5306148 =~ 1.1552 (about 1.16, if we round).

To my knowledge, this is not a coincidence.
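One way to see why it need not be a coincidence: if each point of efficiency is worth the same amount of scoring margin on either end of the floor, then a stat's correlation with margin scales with that stat's spread, so the ratio of the correlations tracks the ratio of the standard deviations. A minimal sketch, using the 5.8 and 5.0 spreads quoted above but with everything else invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical per-game efficiencies relative to league average,
# with the spreads quoted above (5.8 for offense, 5.0 for defense).
off = rng.normal(0.0, 5.8, n)
dfn = rng.normal(0.0, 5.0, n)
margin = off - dfn  # each point is worth the same on either end

r_off = np.corrcoef(off, margin)[0, 1]   # offense's correlation with margin
r_def = np.corrcoef(-dfn, margin)[0, 1]  # defense's (sign flipped so higher = better)
print(r_off / r_def)  # tracks 5.8 / 5.0 = 1.16
```

The simulated correlations themselves won't match the two numbers above (those came from real data), but the ratio lands on the SD ratio.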

http://www.rawbw.com/~deano/helpscrn/corrgauss.html

http://www.rawbw.com/~deano/articles/BellCurve.html

Which I think is exactly what you meant when you said the key to winning is landing at a point further to the right on the offensive efficiency curve than on the defensive efficiency curve. And standard deviation is a big part of that: all else being equal, between two teams with the same >.500 pyth% but different amounts of game-to-game variance in their efficiency levels, the team with less variance will win more games.

So it's absolutely true that if offense or defense is more consistent on a game-to-game basis, it could change the conclusions being drawn here. It's also true that if the league's standard deviation on defense differs from its standard deviation on offense, it could change the conclusions. Ideally, I think the metric we'd want to re-run the study with would be the z-scores of an SOS-adjusted, HCA-adjusted correlated-Gaussian WPct. And even then you'd have to deal with the fact that, as Vincent says in #10, there is an inherent difference in p(C) between an identical Celtics team that wins a title in an 8-team league and one that wins in a 30-team league.
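For reference, the correlated-Gaussian winning percentage from the pages linked above can be sketched roughly as below. The inputs (per-game scoring mean and SD for and against, and their covariance) follow that method's general shape, but the sample numbers here are invented for illustration, not real team data:

```python
from math import erf, sqrt

def phi(x: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def correlated_gaussian_wpct(mean_pf: float, mean_pa: float,
                             sd_pf: float, sd_pa: float, cov: float) -> float:
    """Expected winning pct: treat scoring margin as normal with mean
    (PF - PA) and variance sd_pf^2 + sd_pa^2 - 2*cov, then ask how
    often that margin is positive."""
    return phi((mean_pf - mean_pa) / sqrt(sd_pf**2 + sd_pa**2 - 2.0 * cov))

# Illustrative numbers only: a team outscoring opponents 104-100 per game.
print(correlated_gaussian_wpct(104.0, 100.0, 11.0, 10.0, 30.0))
```

The positive covariance term captures the fact that a team's points scored and allowed rise and fall together with pace, which is what makes it "correlated" rather than plain Gaussian.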

I'd like to be able to put up a visualization of all this, but suppose you have two bell curves. One is centered on the team's offensive efficiency, and the other is centered on the team's defensive efficiency. For any given game, you select a point from the offense bell curve and a point from the defense bell curve, and the team wins if the former exceeds the latter; otherwise, the team loses.

Increasing offensive efficiency means sliding the offense bell curve rightward, and decreasing (that is, improving) defensive efficiency means sliding the defense bell curve leftward. With that simple model in mind, it sure seems as though the two improvements leave the same *relative* arrangement of the bell curves (which is all that matters when it comes to scoring margin, and therefore to wins and losses). However, if one curve also gets broader as you slide it rightward, and the other also gets narrower as you slide it leftward, then the situation is no longer symmetric. There's greater overlap in one case than in the other, and likely a change in the winning probability. Enough, perhaps, to explain the disparity that Neil uncovered.
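A quick Monte Carlo of that two-bell-curve model makes the asymmetry concrete. All of the numbers here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

def win_pct(mu_off: float, sd_off: float, mu_def: float, sd_def: float) -> float:
    """P(offense draw > defense draw) under the two-bell-curve model."""
    off = rng.normal(mu_off, sd_off, n)
    dfn = rng.normal(mu_def, sd_def, n)
    return (off > dfn).mean()

# Sliding one curve right or the other left by the same amount is
# symmetric as long as the spreads stay fixed...
print(win_pct(105, 5, 100, 5))  # +5 offense
print(win_pct(100, 5, 95, 5))   # -5 defense: same relative arrangement

# ...but if improving offense also widens its curve while improving
# defense narrows its curve, the overlap (and win probability) differs.
print(win_pct(105, 7, 100, 5))  # better but more variable offense
print(win_pct(100, 5, 95, 3))   # better and less variable defense
```

The first two probabilities come out essentially identical; in the second pair, the lower-variance improvement wins noticeably more often.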

If variance somehow remained constant (relative to efficiency), that wouldn't prove something was wrong statistically, but it would push the statistical explanation to some difference between the post-season and the regular season. While we all know that the two are different, we'd have to identify something specific that was different between the two that explained the anomaly.

I'll add something that I mentioned to Henry Abbott: Ultimately, the anomaly Neil uncovered is statistical. It simply *has* to be addressed statistically *first*. We can then explain it further in basketball terms, but to attempt to address it in basketball terms first, without identifying the exact mechanics of the statistical anomaly, is putting the cart before the horse.

You can see how this makes a difference when you consider a matchup between Team A with an Off Eff of 110 and Def Eff of 100 vs Team B with Off Eff of 100 and Def Eff of 90. If you just add and subtract the efficiencies, you'd expect a dead-even matchup with average numbers being put up on both ends of the floor. But if you render the efficiencies as percentages and multiply them against each other to see what effect each team will have on the other, you get this:

A on Defense vs B on Offense -> 100% * 100% = 100%

A on Offense vs B on Defense -> 110% * 90% = 99%

So this matchup actually favors Team B by a point per 100 possessions...which is worth, what, 2.7 wins over the course of an 82-game season? That's a pretty huge difference. This suggests a mechanism by which defense is more important than offense...but only if you assume that it's just as easy to get 10% better at defense as it is to get 10% better at offense. As others here have pointed out, there's no reason to suppose that. Z-scores may get around the problem.
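The matchup arithmetic above can be sketched directly, assuming a league-average efficiency of 100 and the rough rule of thumb of about 2.7 wins per point of per-100-possession margin:

```python
LEAGUE_AVG = 100.0  # assumed league-average efficiency (pts/100 poss)

def expected_pts(off_eff: float, opp_def_eff: float) -> float:
    """Multiply the offense's and the defense's percentages of league
    average, as in the matchup arithmetic above."""
    return LEAGUE_AVG * (off_eff / LEAGUE_AVG) * (opp_def_eff / LEAGUE_AVG)

# Team A: 110 off / 100 def.  Team B: 100 off / 90 def.
b_scores = expected_pts(100.0, 100.0)  # B offense vs A defense -> 100.0
a_scores = expected_pts(110.0, 90.0)   # A offense vs B defense ->  99.0

margin = a_scores - b_scores  # -1.0: Team B is favored by a point
print(margin, margin * 2.7)   # assumed ~2.7 wins per point over 82 games
```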

For example, let's assume that it is easier to increase a team's offensive efficiency than to improve its defensive efficiency. Say that for the same amount of effort, you could add 10 points to your offensive efficiency or shave 5 points off your defensive efficiency. Clearly, a team would want to spend that effort on offense if this were the case. Examining the z-scores gives some idea of how difficult it is to reach +10 on offense or -10 on defense, by showing how many teams have actually been able to reach those levels.

In other words, if you compare teams that are one or two standard deviations better than the average offense to teams that are one or two standard deviations better than the average defense, do your conclusions still hold?
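A sketch of that z-score comparison, with made-up league parameters (mean efficiency of 100 on both ends, and the 5.8/5.0 spreads echoing the numbers quoted earlier):

```python
def z_score(value: float, league_mean: float, league_sd: float,
            lower_is_better: bool = False) -> float:
    """Standard deviations better than league average.  For defensive
    efficiency, lower is better, so the sign is flipped."""
    z = (value - league_mean) / league_sd
    return -z if lower_is_better else z

# Made-up league: mean 100 on both ends, offense more spread out (SD 5.8)
# than defense (SD 5.0).  The same 10-point edge is then a bigger deal
# (in SD terms) on defense than on offense.
print(z_score(110.0, 100.0, 5.8))                       # +10 offense ~ 1.72 SDs
print(z_score(90.0, 100.0, 5.0, lower_is_better=True))  # -10 defense = 2.00 SDs
```

On this scale, a "one SD better" offense and a "one SD better" defense are directly comparable even if raw points are not.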

For instance, improving by 10 points on defense is a lot harder to do than improving by 10 points on offense. The percentage changes are also different (measured against the level you end up at):

80 -> 70 is about a 14.3% change.

80 -> 90 is about an 11.1% change.
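For what it's worth, those percentages appear to be measured against the level you move to rather than the one you start from (measured from the starting 80, both moves would be 12.5%). A quick check:

```python
down = (80 - 70) / 70 * 100  # moving 80 -> 70, measured against 70
up = (90 - 80) / 90 * 100    # moving 80 -> 90, measured against 90
print(round(down, 1), round(up, 1))  # 14.3 11.1
```

Either convention makes the commenter's point: the same 10-point move is a bigger multiplicative change on the low side than on the high side.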

Also there's the fact that blow-outs are still just 1 win (I think strong offense teams have more blowouts than strong defensive teams, but who knows.) But just throwing the idea out there. Love the work you've done so far, very interesting.

I guess it was the wrong assumption to think that if a team has more assists than its opponents, it has a better offense. Or that having more steals means better defense. Or that having fewer turnovers means they caused more turnovers. That's what I was trying to get at. I realize they're inclusive, and it's hard to argue with 94 percent accuracy. I'm not knocking your site or your methods; I wouldn't be here all the time if I didn't enjoy them, appreciate them, and try to use them.

I will tell you this, though: whoever ranked higher on my "better defense" list (ranked against their Finals opponent) has won every Finals for the past six years.
