## Does Defense Really Win Championships?

Posted by Neil Paine on August 27, 2010

Here are some quick logistic regressions I ran between offensive/defensive efficiency (as measured by my 1951-2010 estimation equation) and whether or not a team won a championship...

The first regression is between regular-season offensive/defensive rating (relative to the league average) and championships won since 1951, the first year for which I can estimate possessions. The logistic equation to predict championship probability from RS efficiencies was:

p(C) ~ 1 / (1 + EXP(4.7267572 - (0.3988116 * Offense) + (0.612137 * Defense)))

From this equation, we would expect an average team during the Regular Season (0.0 on offense & defense) to have a 0.9% chance of winning an NBA title. If you increase offense to the following levels while keeping defense average, you see this pattern:

Offense | Defense | p(C) |
---|---|---|
0.0 | 0.0 | 0.9% |
1.0 | 0.0 | 1.3% |
2.0 | 0.0 | 1.9% |
3.0 | 0.0 | 2.8% |
4.0 | 0.0 | 4.2% |
5.0 | 0.0 | 6.1% |
6.0 | 0.0 | 8.8% |
7.0 | 0.0 | 12.6% |
8.0 | 0.0 | 17.7% |
9.0 | 0.0 | 24.3% |
10.0 | 0.0 | 32.3% |
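For anyone who wants to play with these numbers, the equation above drops into a few lines of Python (the function name is mine; the coefficients are straight from the regression quoted above):

```python
import math

def p_championship(offense, defense):
    """Championship probability from regular-season efficiency margins
    (pts/100 poss relative to league average), per the 1951-2010 fit.
    Positive offense is good; negative defense is good."""
    return 1.0 / (1.0 + math.exp(4.7267572
                                 - 0.3988116 * offense
                                 + 0.612137 * defense))

# Spot-check the table above:
print(round(100 * p_championship(0.0, 0.0), 1))   # 0.9
print(round(100 * p_championship(5.0, 0.0), 1))   # 6.1
print(round(100 * p_championship(10.0, 0.0), 1))  # 32.3
```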

Conversely, if you make the defense better while keeping the offense average, here are the expected championship probabilities:

Offense | Defense | p(C) |
---|---|---|
0.0 | 0.0 | 0.9% |
0.0 | -1.0 | 1.6% |
0.0 | -2.0 | 2.9% |
0.0 | -3.0 | 5.3% |
0.0 | -4.0 | 9.3% |
0.0 | -5.0 | 15.9% |
0.0 | -6.0 | 25.8% |
0.0 | -7.0 | 39.1% |
0.0 | -8.0 | 54.2% |
0.0 | -9.0 | 68.6% |
0.0 | -10.0 | 80.1% |

As you can see, this result seems to bear out the old adage that "Defense Wins Championships"; for instance, to have the exact same title odds as a team with an average offense and a defense that was 5.0 pts/100 poss better than average, an average defensive team would have to score 7.7 more pts/100 poss than average!
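That 7.7 figure falls straight out of the coefficients: for equal title odds, the two log-odds contributions must match, i.e. 0.3988116 * Offense = 0.612137 * 5.0. A quick sanity check (nothing more than arithmetic):

```python
# Offense needed to equal the log-odds boost of a -5.0 defense,
# using the 1951-2010 coefficients quoted above.
equiv_offense = 0.612137 * 5.0 / 0.3988116
print(round(equiv_offense, 1))  # 7.7
```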

However, we're taking into account all of NBA history here (well, except for 1950), and in case you forgot, there was this ridiculous defensive dynasty in the late '50s & 1960s that probably skews the findings heavily toward defense... Check out the Celtics' yearly offensive/defensive efficiency splits relative to the league average during their 11 championship seasons:

Year | Team | W | L | Champs | Off | Def |
---|---|---|---|---|---|---|
1957 | Boston Celtics | 44 | 28 | 1 | 0.27 | -4.26 |
1958 | Boston Celtics | 49 | 23 | 0 | 0.35 | -4.10 |
1959 | Boston Celtics | 52 | 20 | 1 | 0.69 | -4.37 |
1960 | Boston Celtics | 59 | 16 | 1 | 1.41 | -4.77 |
1961 | Boston Celtics | 57 | 22 | 1 | -2.14 | -6.37 |
1962 | Boston Celtics | 60 | 20 | 1 | 0.31 | -6.85 |
1963 | Boston Celtics | 58 | 22 | 1 | -1.75 | -7.42 |
1964 | Boston Celtics | 59 | 21 | 1 | -2.90 | -9.27 |
1965 | Boston Celtics | 62 | 18 | 1 | -1.11 | -7.98 |
1966 | Boston Celtics | 54 | 26 | 1 | -1.54 | -5.57 |
1967 | Boston Celtics | 60 | 21 | 0 | 2.49 | -4.14 |
1968 | Boston Celtics | 54 | 28 | 1 | -0.48 | -3.87 |
1969 | Boston Celtics | 48 | 34 | 1 | -0.74 | -5.52 |

*(Note: Remember, positive is good for offenses, but negative is good for defenses.)*

Boston's dominance during that stretch could possibly be inflating our sense of whether defensive teams are more likely to win because: A) Their defense was so historically outstanding in the Bill Russell era; and B) Their offense was so mediocre (they were below-average in 7 of the 13 years listed above!). To avoid the possibility of this skewing our sample, let's re-run the regression using only results since the 1976 ABA-NBA merger (and using the traditional possessions formula rather than the historical estimation):

p(C) ~ 1 / (1 + EXP(5.5573404 - (0.5306148 * Offense) + (0.6129486 * Defense)))

Once again, here are the tables for average defenses (left) and offenses (right):

Offense | Defense | p(C) | Offense | Defense | p(C) |
---|---|---|---|---|---|
0.0 | 0.0 | 0.4% | 0.0 | 0.0 | 0.4% |
1.0 | 0.0 | 0.7% | 0.0 | -1.0 | 0.7% |
2.0 | 0.0 | 1.1% | 0.0 | -2.0 | 1.3% |
3.0 | 0.0 | 1.9% | 0.0 | -3.0 | 2.4% |
4.0 | 0.0 | 3.1% | 0.0 | -4.0 | 4.3% |
5.0 | 0.0 | 5.2% | 0.0 | -5.0 | 7.6% |
6.0 | 0.0 | 8.5% | 0.0 | -6.0 | 13.2% |
7.0 | 0.0 | 13.7% | 0.0 | -7.0 | 22.0% |
8.0 | 0.0 | 21.2% | 0.0 | -8.0 | 34.2% |
9.0 | 0.0 | 31.4% | 0.0 | -9.0 | 49.0% |
10.0 | 0.0 | 43.8% | 0.0 | -10.0 | 63.9% |

These findings still bear out the axiom of defense winning championships, but the split between offense & defense is much smaller than it had been when we included pre-merger seasons. Going back to our earlier example, to have the same p(C) as a team with an average offense and a -5.0 defense, an average defensive team would have to score only 5.8 more pts/100 poss than average if we use this equation.

However, the continued prominence of defense even when we drop the heavily D-oriented Celtics dynasty from the sample does suggest that, all things being equal, teams should prioritize excellence at that end of the court if they want to win a championship.

August 27th, 2010 at 12:41 pm

I will note that the spread (stdev) on offense is usually greater than on defense, both at the team level and at the individual level. It seems that offense is simply more variable in that regard; defense is more about effort and teamwork than individual skill.

August 27th, 2010 at 12:57 pm

Neil, what inputs do you use for "offense" and "defense" exactly?

I looked at your other post, where you came up with your formula, and it looks like you're only including points for the offensive stat. Is this equation accounting for assists, steals, or blocks?

August 27th, 2010 at 1:16 pm

Re: #1 - That's true, I can buy that. I found that defense was more highly correlated with winning championships, but it could be that it's easier to find a competent defense than a competent offense. Maybe I could run this again with z-scores to account for the differences in distribution.

Re: #2 - This study is at the team level, so points per possession is all that matters. But if you're referring to the equation for estimating possessions, the variables in that formula were the only significant ones when it came to predicting a team's total number of possessions. Also, including steals and blocks as predictors would have been silly, since the whole point is to come up with an estimate we can use going back to 1951.

August 27th, 2010 at 1:44 pm

it's interesting to see the numbers on this. it's not surprising, really. from a purely logical standpoint, a team controls (to some extent) effort exerted, concentration level, and attention to detail to a higher degree than whether or not the ball actually goes through the twine.

as the finals showed us, some games you can execute your offense perfectly and get wide open jumpers for a great shooter like Ray Allen and still come up empty on offense.

i wonder if the offensive variance between the winners and runners up is bigger than the defensive? it seems as though pretty frequently the team with players who are good at manufacturing scoring opportunities against great defense (even if they aren't shooting especially well for a series) tend to win.

August 27th, 2010 at 4:31 pm

What I would love to see is: is there a difference between teams that had great individual defenders as opposed to great team defense?

In a broad sense: imagine two teams, both champions, both 'defense'-focused teams. However, one team has players whose defensive ratings span a much broader range - a team with one or two lockdown guys and a few decent players, more of a spread - I'm thinking the Bowen-Duncan Spurs here. Or is it more about a team with a more evenly spread defensive talent - the '04 Pistons, maybe?

Or perhaps this is a flawed thesis - maybe great defensive teams require an overall high level of defensive play rather than a few standouts.

August 27th, 2010 at 5:06 pm

I would say it's tough to find a truly great defensive team without truly great individual defenders. '08 Celts had KG and Rondo (and Pierce and Posey and Perkins). '03, '05, '07 Spurs had Timmy and Bruce. '04 Pistons had Ben & Prince. '96 Bulls had MJ, Scottie, and Worm. '94 Knicks had Ewing & Oakley.

On the other hand, I'm sure there have been plenty of great individual defenders who didn't play on overall great defensive teams. Hakeem's Rockets come to mind from one of Neil's posts a few days ago.

August 27th, 2010 at 6:48 pm

What were the worst defensive teams to win a title post-merger? What were the worst offensive teams to win post-merger (I feel like possibly '04 Detroit)?

What is the average league rank on offense and defense of all championship teams (you could separate by merger too)?

I'd be curious.

August 28th, 2010 at 12:37 am

What type of change is that? A weak offensive team that was good on defense won many championships, so you'll throw them out?

Just throw out all the good defensive teams that won championships, and I'm sure you'll get a different result.

August 28th, 2010 at 3:43 am

Neil, the 60's Celtics were a prolific offensive team. 120 PPG? That's "Showtime". As prolific as the '80s Lakers. Those guys were runners. Cousy. Heinsohn. Hondo. I'd estimate they scored 20-40 fast break points per game, in the films I've seen. As a measure, modern teams struggle to score 10-15 fast break points. Obviously they slowed down in the late '60s, much like how "Showtime" became "Slowtime" around '87. What is the point of perpetuating the myth that those teams were dominant defensive teams when they were scoring 120 PPG for three-quarters of a decade?

August 28th, 2010 at 10:12 am

This may not affect your results, but you also need to control for changes in the number of teams over time through expansion. As the number of teams has increased the baseline probability of winning a championship has fallen.

August 28th, 2010 at 11:08 am

Neil, I wasn't really happy with your answer but that could be because I don't understand it. That's fine. I'd much rather use statistics that we can trace each and every year. Since I wanted to, I decided I'd take data since the 2004-05 regular season and see what I came up with. Mind you, I don't know how to calculate percentages (odds of winning a title), but I did come up with a way to use FGM, FGA, TRB, AST, STL, BLK, TOV, and PTS.

For defensive rankings, I compared team and opponents based on field goals missed (the difference), TRB-diff, STL-diff, BLK-diff, and TOV-diff.

For offense, I compared teams and opponents based on FGM-diff, AST-diff, and PTS-diff.

After doing this for the past six seasons, here's what I came up with.

Champions:

Def rank on avg: 2.333 (best ranking 1st (three times); worst ranking 6th (once))

Off rank on avg: 3.833 (best 1st (once); worst 10th (once))

Runners-up:

Def rank: 4.667 (best 2nd (once); worst 9th (once))

Off rank: 4.833 (best 3rd (once); worst 8th (once))

I know this is a very limited sample, but it seems to me there's a lot to be said about including assists, steals, blocks and turnovers rather than relying on rebounds, points, and estimated possessions.

In 2005 and 2008, two of the top three defensive teams made the Finals.

In 2006, both of the top two defensive teams made the Finals.

In 2009, two of the top four teams made the Finals.

In 2007 and 2010, there seemed to be an exception with one of the top five in the Finals in '07 and two of the top seven in '10.

As for offense, in '05, '06, '08, and '09, two of the top five offensive teams made the Finals.

In '07, one of the top five and, in '10, one of the top three.

August 28th, 2010 at 3:38 pm

Vincent, good point about controlling for the # of teams in the league.

MikeN, the reason you look at an alternate sample where one historical outlier team hasn't dominated so much is that if you include them, you run the risk of the regression telling you more about the correlation between the 1950s/60s Celtics and winning championships, rather than the correlation between having a great defense and winning championships.

Matt, it doesn't matter how many PPG they scored, it matters how many points they scored per possession. And these were not good offensive basketball teams, they just played at a monstrous pace. Since we re-established several posts ago that pace and offensive/defensive efficiency are independent of each other, it's foolhardy to judge a team's offense on PPG, because it obscures their efficiency by also including an element of the coach's choice (pace), not the team's actual ability to put the ball in the basket. If you still don't believe these were mediocre-to-bad offensive teams, check out where they rank in FG% (another stat found to be independent of pace) each year:

From 1961-69, they only ranked in the top half of the league in FG% once (1967). They were either last or second-to-last in the NBA 5 times. Those Celtics won because of their overwhelmingly dominant defense -- not their offense, which was actually below-average for the majority of that 11-titles-in-13-years run.

August 28th, 2010 at 4:01 pm

Joseph, I don't really understand what you're doing there... Differentials between a team's stats and its opponents' are neither offensive nor defensive, but they combine elements of both O and D. For instance, when you look at the differential between assists and opponent assists (which I assume is one of the metrics you ran), you're not only measuring how well the team distributes the ball, but also how well they prevent the opponent from doing so, making it not strictly an offensive stat.

Not to mention the fact that, at the team level, all of those component stats (FGM, FGA, TRB, AST, STL, BLK, TOV) are merely pieces of the puzzle that go into determining offensive and defensive efficiency. We use points scored/allowed per possession as the basic efficiency metrics for a team because they explain 94% of wins, and are pace-neutral (again, not influenced by the team's choice of tempo). Any combination of rebounds, steals, assists, etc. would be at a level down from points/possession, since those are merely components that tell how a team arrived at its efficiency level. A great way to drill down to the level under points per possession is to look at the Four Factors: http://www.basketball-reference.com/about/factors.html

And again, those were determined by looking at their correlations with efficiency. If you want to look at basketball analysis as a pyramid, we have team winning % at the top... Below it are luck and pythagorean winning % as the determinants that make up WPct... Below pythagorean %, we have offensive efficiency and defensive efficiency, as measured in points per possession... Finally, below those we have the Four Factors at the team level, and stats like ORtg/%Poss/DRtg/SPM at the individual level to explain how the team arrived at those efficiencies.

August 28th, 2010 at 4:15 pm

Also re: Joseph, I realized I wasn't exactly clear about the inputs I used for "offense" and "defense"...

Here's how they worked: For each team, we know how many points they scored and allowed. We can estimate how many possessions they used by employing one of two equations -- either the standard post-1974 version that you see throughout the site, or the historical estimator I found back in June. The post-'74 version is obviously going to be much more accurate, but either way, you're going to come up with an estimate for the number of possessions the team used.

Then you divide both points scored and points allowed by the possession total and multiply by 100 to arrive at Offensive and Defensive Ratings, or points scored/allowed per 100 possessions.

The next step is to determine the league's average points per possession, so you add up the points and possessions for every team in the league and calculate 100*(pts/poss).

The final step is to subtract the league average efficiency from the team's offensive and defensive ratings, which gives you the number of pts/100 poss they scored/allowed above average. On defense, negative numbers are good because you are subtracting the league average from the team's defensive rating, and ideally you want the average to be bigger than your team's DRtg.

To summarize:

Offense = (100 * (Pts / Possessions)) - (100 * (LgPts / LgPoss))

Defense = (100 * (Opponent Pts / Possessions)) - (100 * (LgPts / LgPoss))
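Put together, the recipe above can be sketched as a small function (the names and the sample numbers below are mine, purely for illustration):

```python
def relative_ratings(pts, opp_pts, poss, lg_pts, lg_poss):
    """Offensive/defensive rating relative to league average, in
    pts/100 poss. Positive offense is good; negative defense is good."""
    lg_avg = 100.0 * lg_pts / lg_poss          # league pts per 100 poss
    offense = 100.0 * pts / poss - lg_avg      # team ORtg minus average
    defense = 100.0 * opp_pts / poss - lg_avg  # team DRtg minus average
    return offense, defense

# Toy season: 8800 scored, 8400 allowed on 8000 possessions, in a league
# averaging 107 pts/100 poss (856000 pts on 800000 possessions):
off, dfn = relative_ratings(8800, 8400, 8000, 856000, 800000)
print(off, dfn)  # 3.0 -2.0
```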

August 28th, 2010 at 7:01 pm

Great clinic, Neil. If this blogging thing doesn't take off, you should be a teacher.

I know this particular study isn't looking for it, but I wonder if certain styles of great defense lead to more winning than others. I recall seeing somewhere (very possibly here or at Pelton's site) that teams that play great defense without fouling (like San Antonio at its best) wind up with a big edge.

Also I recall reading somewhere (and this may have actually been Berri), scoring off of steals is typically much more efficient than otherwise - so do teams whose defense causes a lot of turnovers (like Chicago's best defensive seasons) tend to win more?

August 28th, 2010 at 7:46 pm

Hey Neil,

Wondering if you can follow this up and answer these two more specific questions based on your post:

(1) Do defensive oriented teams do better in the regular season, in general?

(2) Are defensive oriented teams more likely to pull upsets in the post-season?

Thanks!

August 28th, 2010 at 7:55 pm

Thanks, Neil.

I guess it was the wrong assumption to think that if a team has more assists than its opponents, it has a better offense. Or that having more steals means better defense. Or...less turnovers means they caused more turnovers. That's what I was trying to get at. I realize they're inclusive, and that it's hard to argue with 94 percent accuracy. I'm not knocking your site or your methods, I wouldn't be here all the time if I didn't enjoy them, appreciate them and try to use them.

I will tell you this, though, whoever ranked higher on my "better defense" list won every Finals for the past six years (ranking better against the Finals opponent).

August 29th, 2010 at 7:24 am

To gain some further insight, it might be good to also run a regression based on the team's defensive and offensive efficiency "rank" that year. That way you don't have to worry about the semantics of the numbers.

For instance, -10 points on defense is a lot harder to do than +10 points on offense. The percentages are also different.

80 -> 70 is a 14.3% change.

80 -> 90 is a 11.1% change.

Also there's the fact that blow-outs are still just 1 win (I think strong offense teams have more blowouts than strong defensive teams, but who knows.) But just throwing the idea out there. Love the work you've done so far, very interesting.

August 29th, 2010 at 4:16 pm

When you compare the value of offense vs defense, you compare the increase in championship probability as you change offensive or defensive efficiency by a certain number of absolute points. However, it would perhaps be more informative to see how these changes in the points per possession correspond to the z-scores for offensive and defensive efficiency.

For example, let's assume that it is easier to increase a team's offensive efficiency than it is to increase the team's defensive efficiency. Let's say that for the same amount of effort, you could increase your offensive efficiency by 10 points or increase your defensive efficiency by -5 points. Clearly, a team would want to spend the effort on improving offense if this were the case. Examining the z-scores gives some idea of how difficult it is to reach +10 offense or -10 defense by judging how many teams have been able to reach those levels.

In other words, if you compare teams that are one or two standard deviations better than the average offense to teams that are one or two standard deviations better than the average defense, do your conclusions still hold?
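One way to operationalize the z-score suggestion (a sketch; the league efficiency list below is made up for illustration):

```python
import statistics

def z_score(team_value, league_values):
    """How many standard deviations a team's relative efficiency sits
    from the league mean (population SD, since we have the whole league)."""
    mean = statistics.mean(league_values)
    sd = statistics.pstdev(league_values)
    return (team_value - mean) / sd

# Illustrative league of relative offensive efficiencies (pts/100 above avg):
league_off = [-6, -4, -3, -1, 0, 0, 1, 2, 3, 8]
print(round(z_score(8, league_off), 2))  # 2.14
```

A +8 offense in this toy league is a 2.1-sigma team; running the regression on z-scores instead of raw margins would put offense and defense on a common "how rare is this?" scale.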

August 30th, 2010 at 5:57 pm

I think some of this finding may be a result of mathematical artifacts introduced by measuring efficiency as total points added or subtracted per 100 possessions. Jax hints at this above. It makes more sense to think of offensive/defensive quality as a proportional factor; that is to say, a team scores or allows X% more or less per possession.

You can see how this makes a difference when you consider a matchup between Team A with an Off Eff of 110 and Def Eff of 100 vs Team B with Off Eff of 100 and Def Eff of 90. If you just add and subtract the efficiencies, you'd expect a dead-even matchup with average numbers being put up on both ends of the floor. But if you render the efficiencies as percentages and multiply them against each other to see what effect each team will have on the other, you get this:

A on Defense vs B on Offense -> 100% * 100% = 100%

A on Offense vs B on Defense -> 110% * 90% = 99%

So this matchup actually favors Team B by a point per 100 possessions...which is worth, what, 2.7 wins over the course of an 82-game season? That's a pretty huge difference. This suggests a mechanism by which defense is more important than offense...but only if you assume that it's just as easy to get 10% better at defense as it is to get 10% better at offense. As others here have pointed out, there's no reason to suppose that. Z-scores may get around the problem.
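The multiplicative matchup described above can be written out explicitly (a sketch; the function and variable names are mine):

```python
LG = 100.0  # league-average pts/100 poss, for scale

def expected_pts(off_eff, def_eff, lg=LG):
    """Expected pts/100 poss when an offense meets a defense, treating
    each efficiency as a proportional factor on the league average."""
    return lg * (off_eff / lg) * (def_eff / lg)

# Team A: Off 110, Def 100.  Team B: Off 100, Def 90.
a_scores = expected_pts(110, 90)   # A's offense against B's defense
b_scores = expected_pts(100, 100)  # B's offense against A's defense
print(round(a_scores, 1), round(b_scores, 1))  # 99.0 100.0
```

So under the multiplicative reading, the "even" matchup actually favors Team B by a point per 100 possessions.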

August 31st, 2010 at 7:50 pm

In my opinion, the most straightforward explanation is one in which the efficiency's variance (on a per-game basis) is correlated with the variance. That is not entirely unexpected; the more you score (or allow), the more you would expect that amount to vary. You might expect the coefficient of variance to remain the same, but not the variance itself.

If variance somehow remained constant (relative to efficiency), that wouldn't prove something was wrong statistically, but it would push the statistical explanation to some difference between the post-season and the regular season. While we all know that the two are different, we'd have to identify something specific that was different between the two that explained the anomaly.

I'll add something that I mentioned to Henry Abbott: Ultimately, the anomaly Neil uncovered is statistical. It simply has to be addressed statistically first. We can then explain it further in basketball terms, but to attempt to address it in basketball terms first, without identifying the exact mechanics of the statistical anomaly, is putting the cart before the horse.

August 31st, 2010 at 7:59 pm

Oops, I meant that the most straightforward explanation is one in which the efficiency's variance is correlated with the efficiency. Not the variance again.

I'd like to be able to put up a visualization of all this, but suppose you have two bell curves. One is centered on the team's offensive efficiency, and the other is centered on the team's defensive efficiency. For any given game, you select a point from the offense bell curve and a point from the defense bell curve, and the team wins if the former exceeds the latter; otherwise, the team loses.

Increasing offensive efficiency means sliding the offense bell curve rightward, and decreasing (that is, improving) defensive efficiency means sliding the defense bell curve leftward. With that simple model in mind, it sure seems as though the two improvements leave the same relative arrangements of the bell curves (which is all that matters when it comes to scoring margin, and therefore to wins and losses). However, if as you slide one curve rightward it also gets broader, and if you slide the other curve leftward it also gets narrower, then the situation is no longer symmetric. There's greater overlap in one case than in the other, and likely a change in the winning probability. Enough, perhaps, to explain the disparity that Neil uncovered.

September 1st, 2010 at 1:57 am

These are all good points -- the shape of the distribution and how narrow or wide it is (i.e., the game-to-game standard deviation of efficiency) is definitely important in determining how valuable defensive efficiency is relative to offense. Brian, a lot of what you are talking about applies to what Dean Oliver called the "correlated gaussian method":

http://www.rawbw.com/~deano/helpscrn/corrgauss.html

http://www.rawbw.com/~deano/articles/BellCurve.html

Which I think is exactly what you meant when you mentioned the key to winning being to land on a point further to the right on the offensive efficiency curve than the defensive efficiency curve. And standard deviation is a big part of that: all else being equal, between two teams with the same >.500 pyth% but different amounts of game-to-game variance in their efficiency levels, the team with less variance will win more games.

So it's absolutely true that if offense or defense is more consistent on a game-to-game basis, it could change the conclusions being drawn here. It's also true that if the league's standard deviation in defense is different from in offense, it could change the conclusions. Ideally, I think the metric we'd want to re-run the study with would be the z-scores of an SOS-adjusted, HCA-adjusted correlated gaussian WPct. And even then you'd have to deal with the fact that, as Vincent says in #10, there is inherently a difference in p(C) between an identical Celtics team that wins a title in an 8-team league and one that wins in a 30-team league.
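The two-bell-curve model described in the comments above is easy to simulate. A Monte Carlo sketch (all means and standard deviations below are illustrative, not fitted to anything):

```python
import random

def win_prob(off_mean, def_mean, off_sd, def_sd, n=200_000, seed=42):
    """Fraction of simulated games in which the offensive draw beats
    the defensive draw -- the win condition in the bell-curve model."""
    rng = random.Random(seed)
    wins = sum(rng.gauss(off_mean, off_sd) > rng.gauss(def_mean, def_sd)
               for _ in range(n))
    return wins / n

# A 5-point edge with equal spreads, whichever end it comes from:
print(win_prob(105, 100, 10, 10))  # ~0.64 (shifting both means by -5 matches)
# If improving the offense also widens its game-to-game spread,
# the symmetry breaks and the win probability drops:
print(win_prob(105, 100, 13, 10))  # ~0.62
```

With equal spreads the two improvements are interchangeable, but unequal variances pull the win probabilities apart, which is the same intuition behind the correlated gaussian method linked above.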

September 15th, 2010 at 1:56 pm

I know I'm a little late to the game here, but just wanted to offer a tip. Looking at the coefficients for your second regression, they are 0.5306148 for offense and 0.6129486 for defense. You then write that "to have the same p(C) as a team with an average offense and a -5.0 defense, an average defensive team would have to score only 5.8 more pts/100 poss than average if we use this equation."

5.8 / 5.0 = 1.16

0.6129486 / 0.5306148 =~ 1.1552 (about 1.16, if we round).

To my knowledge, this is not a coincidence.
