Comments on: BBR Rankings: Schedule-Adjusted Offensive and Defensive Ratings (December 10, 2010)
NBA & ABA Basketball Statistics & History
Mon, 21 Nov 2011 20:56:04 +0000

By: yariv Mon, 13 Dec 2010 00:10:20 +0000 I'm not sure it would provide significant results either, but I thought it might be interesting to see. One more remark on the subject, if you do tackle it. The number of dimensions to use is quite arbitrary (2? 3? 5?), but I thought of an approach to establishing a reasonable number. You could run prediction tests (on last year's data, for example), splitting it into training and test data. Minimize on the training data, and see how well (in terms of average squared error) the result predicts the test data. I would expect improvement as the dimension increases, but I speculate this improvement will be drastic as long as there are actual "aspects" of the game that correlate with the new dimension, and quite small afterwards. That's based on intuition alone, of course, so I wouldn't trust the idea, but if it's simple to test you could try it and see what the graph looks like.
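The selection idea above can be sketched with a runnable toy. The data here are synthetic and a polynomial degree stands in for the model's "number of dimensions"; the principle is the same: held-out squared error drops sharply while added complexity captures real structure, then flattens once it's fitting noise.

```python
# Toy illustration of picking model complexity by train/test squared error.
# The polynomial degree plays the role of the "number of dimensions";
# the true generating model has degree 2, so the error curve should
# drop sharply up to 2 and roughly flatten afterwards.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 200)
y = 1.0 + 0.5 * x - 0.8 * x**2 + rng.normal(0.0, 0.3, x.size)  # true "dimension" is 2

order = rng.permutation(x.size)
train, test = order[:140], order[140:]

def holdout_mse(degree):
    """Fit on the training split, return avg squared error on the test split."""
    coeffs = np.polyfit(x[train], y[train], degree)
    pred = np.polyval(coeffs, x[test])
    return float(np.mean((pred - y[test]) ** 2))

for d in (1, 2, 3, 5):
    print(d, round(holdout_mse(d), 3))
```

Plotting `holdout_mse` against the dimension and looking for the "elbow" is exactly the graph yariv describes.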

I would be delighted if you'd email me about this idea, by the way. (I assume you can get access to the required email address.)

By: Neil Paine Sun, 12 Dec 2010 16:29:20 +0000 Ah, now I see. So you'd basically be trying to tease out different components of a team's offensive rating based on how their performance changes against certain "types" of opponents (not knowing what the types are beforehand, but expecting them to emerge automatically when you run the formula).

I'd have to play around with the execution (and I'm not totally sure it would produce significant results), but it's worth a look at some point.

By: yariv Sat, 11 Dec 2010 22:05:57 +0000 Thanks Neil, I'll explain what I mean by "modes".

I came to think, regarding this general approach, that having offence/defence as a single number is somewhat problematic. Take OKC, for example: they had a good defensive team last year, but with a real problem defending the paint, and they had more trouble against teams with good offensive C/PF. A one-dimensional approach to offence/defence (a single number for each) will never reflect anything like that. So I tried to think about how you could add more dimensions. In your formula, you can't add more parameters, because they would all have exactly the same effect and you wouldn't gain anything. But by putting things in as multipliers, you can. As you did it (one-dimensional) it's almost identical ((multiplier - 1) times 100 is the additive value), because the difference is only second order, but you can add more dimensions and they won't be identical.

So, you could do the same with a formula like (LgAvg1+HCA1)*HTOS1/ATDS1 + (LgAvg2+HCA2)*HTOS2/ATDS2, and the numbers might diverge from 1 by more than they do now, as they would carry additional meaning: beyond the actual efficiency of the team (which is very similar across the league, of course), they could be related to the team's preferences.
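To make the two-term formula concrete, here is a direct transcription with invented numbers. The split of the league constant into two halves and all the strength values are made up purely for illustration:

```python
# Direct transcription of the proposed two-term formula. All inputs
# (the LgAvg/HCA split and the strength multipliers) are hypothetical.
def expected_home_ortg(lg1, hca1, htos1, atds1, lg2, hca2, htos2, atds2):
    """(LgAvg1+HCA1)*HTOS1/ATDS1 + (LgAvg2+HCA2)*HTOS2/ATDS2"""
    return (lg1 + hca1) * htos1 / atds1 + (lg2 + hca2) * htos2 / atds2

# A team strong in "type 1" offence visiting a defence weak against type 1:
ortg = expected_home_ortg(52.0, 1.7, 1.06, 0.97,   # type-1 term
                          52.0, 1.7, 0.99, 1.02)   # type-2 term
print(round(ortg, 1))  # 110.8
```

The point of the structure is that the two multiplier pairs can move independently, so a matchup advantage in one "type" of offence shows up even when the overall efficiencies are average.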

It's completely hypothetical at the moment, of course, but I hope that by looking at teams with a high type-1 offence and contrasting them with teams with a high type-2 offence, we could find some meaning in those "types". I find it interesting because these types would come directly from the formula, not from our perceptions of basketball, which I tend not to trust as long as they are not supported by numbers. (Too many ideas that make a lot of sense and are widely accepted by the public, like clutch play or the "hot hand", seem not to exist...)

Was I clear? I could try to explain again.

By: Neil Paine Sat, 11 Dec 2010 16:46:32 +0000 OK, I ran that through the same data I ran the BBR rankings on (so, not including last night's action):

team_id Offense Defense
ATL 1.021 1.001
BOS 1.025 1.059
CHA 0.981 0.999
CHI 0.979 1.036
CLE 0.955 0.961
DAL 1.015 1.051
DEN 1.021 1.003
DET 0.968 0.963
GSW 0.985 0.965
HOU 1.021 0.975
IND 0.989 1.036
LAC 0.979 0.973
LAL 1.052 1.003
MEM 0.989 0.993
MIA 1.041 1.051
MIL 0.944 1.046
MIN 0.972 0.969
NJN 0.970 0.977
NOH 0.987 1.053
NYK 1.030 0.965
OKC 1.027 0.978
ORL 0.999 1.052
PHI 0.993 1.001
PHO 1.054 0.946
POR 0.997 1.002
SAC 0.951 0.972
SAS 1.052 1.032
TOR 1.002 0.972
UTA 1.029 1.016
WAS 0.970 0.950
lgavg 53.671
hca 53.671

(You probably shouldn't split the lg constant and HCA out, because you're just applying the multiplier to their sum -- which happens to be the lg average ORtg. It doesn't matter, though; half the avg ORtg multiplied by two obviously equals the avg ORtg.)

Average squared prediction error per game was 190.2; the BBR Rankings' avg squared error was 185.1. Still, it's an interesting concept, and definitely a different approach than the SRS.
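For anyone curious how a fit like this can be run, here is one sketch (not Neil's actual code): taking logs turns OffRtg = C * HTOS / ATDS into a linear system, which ordinary least squares solves directly. Teams, schedules, and strengths below are all simulated.

```python
# Sketch of fitting the multiplicative model by least squares in log space:
# log(OffRtg) = log(C) + log(HTOS) - log(ATDS), which is linear in the
# unknown log-strengths. All data here are simulated, not real NBA games.
import numpy as np

rng = np.random.default_rng(2)
n_teams, n_games = 8, 400
true_off = rng.normal(0.0, 0.03, n_teams)   # true log offensive strengths
true_def = rng.normal(0.0, 0.03, n_teams)   # true log defensive strengths
log_c = np.log(107.3)                       # log(LgAvg + HCA), roughly 2 * 53.67

home = rng.integers(0, n_teams, n_games)
away = rng.integers(0, n_teams, n_games)
keep = home != away
home, away = home[keep], away[keep]
log_ortg = log_c + true_off[home] - true_def[away] + rng.normal(0, 0.02, home.size)

# Design matrix: one column per offensive strength, one per defensive
# strength, plus an intercept for the league constant.
X = np.zeros((home.size, 2 * n_teams + 1))
X[np.arange(home.size), home] = 1.0              # +log(HTOS)
X[np.arange(home.size), n_teams + away] = -1.0   # -log(ATDS)
X[:, -1] = 1.0                                   # intercept = log(C)
beta, *_ = np.linalg.lstsq(X, log_ortg, rcond=None)
fit_off = np.exp(beta[:n_teams])                 # multipliers, ~1.0 = average
```

One wrinkle: the model has a gauge freedom (shifting every offensive strength and the constant in opposite directions leaves predictions unchanged), so only relative strengths are identified; `lstsq` resolves this by returning the minimum-norm solution.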

By: Neil Paine Sat, 11 Dec 2010 16:41:16 +0000 Re: #6 - Interesting. That would produce values where 1 was average, and instead of adding the ratings to the league avg (as in SRS), the strength ratings would essentially be multipliers that you apply to the league avg rating. I'm not sure what you mean by "modes" and "preferences" of offense, though.

By: DSMok1 Sat, 11 Dec 2010 16:19:15 +0000 @ #7, Jared Ras

That's actually quite common for Phoenix: they run a high-risk, fast-paced offense that scores a bunch of points but also concedes a lot of turnovers and scores for the other team.

Milwaukee was similar last year, in reverse.

By: Jared Ras Sat, 11 Dec 2010 03:41:55 +0000 I pointed this out a couple weeks ago when Milwaukee was 30th Offense and 1st Defense, but now in reverse, Phoenix is 1st Offense and 30th Defense. That seems quite odd to have two teams do that (one each way) in the same year.

By: yariv Fri, 10 Dec 2010 23:09:00 +0000 Neil, I have to wonder about something. Is it possible to run the squared-error minimization on a formula like:
Home OffRtg = (LgAvg + HCA)*HomeTeamOffensiveStrength/AwayTeamDefStrength
That is, will the minimization algorithm support this formula?

The reason I'm asking: if you use such a formula, you could sum a few such terms ((LgAvg1+HCA1)*HTOS1/ATDS1 + (LgAvg2+HCA2)*HTOS2/ATDS2), and the numbers might not be very close to 1. That is, it seems plausible that the HTOSs in such a formula would "represent" different types of offence, and their values would reflect not only how well the team runs specific offensive modes, but also its preferences. I thought it might be interesting to see such results, especially since you allow the distinction between "modes" to arise automatically, without making any assumption about what they might be (except their number). It might be pointless, but I don't know how to check.

What do you think?

By: dsong Fri, 10 Dec 2010 18:45:41 +0000 Lakers are too high... it's not a good sign when they're life-and-death to beat a couple of garbage teams who are among the worst in the league and are a combined 0-22 on the road.

Hopefully Bynum's return will change things.

By: Neil Paine Fri, 10 Dec 2010 18:06:05 +0000 You should read Doug's original post on SRS:

Basically, the ratings start with a team's efficiency differential. Then they adjust for strength of schedule by adding or subtracting based on how far above or below average the opponents' ratings were. If you're a +2 eff. diff. team facing a +2 schedule, your rating would be +4; if you're at +2 e.d. and faced a -2 schedule, your rating is 0.

The real trick is that the SOS is dependent on the ratings and the ratings are dependent on SOS, so it has to essentially run through many iterations before converging on the final set of ratings.

Anyway, to answer your question, your efficiency diff. gets credited for playing a tough schedule and debited for playing an easy schedule.