I would be delighted if you'd email me about anything regarding this idea, by the way. (I assume you can get access to the required email address.)

I'd have to play around with the execution (and I'm not totally sure it would produce significant results), but it's worth a look at some point.

Thinking about this general approach, it occurred to me that representing offence/defence as a single number is somewhat problematic. Take OKC, for example: they were a good defensive team last year, but with a real weakness defending the paint, and they had more trouble against teams with good offensive Cs/PFs. A one-dimensional approach (a single number for offence/defence) can never reflect anything like that. So I tried to think about how you could add more dimensions. In your formula you can't simply add more parameters, because they would all have exactly the same effect and you wouldn't gain anything. But by putting them in as multipliers, you can. In the one-dimensional case the two forms are almost identical (the multiplier minus 1, times 100, is the additive value; the difference is only second-order), but with more dimensions they are no longer equivalent.

So you could do the same with a formula like (LgAvg1+HCA1)\*HTOS1/ATDS1 + (LgAvg2+HCA2)\*HTOS2/ATDS2, and the numbers might diverge from 1 by more than they do now, because they would carry additional meaning: beyond the team's actual efficiency (which is, of course, very similar across the league), they could reflect the team's stylistic preferences.
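As a sketch, the two-term version of the formula might look like this in code. All names and values here are my own illustration, not fitted numbers:

```python
# Hypothetical sketch of the proposed two-term multiplicative formula.
# Every parameter value below is made up for illustration only.

def predicted_home_ortg(lg_avg_1, hca_1, htos_1, atds_1,
                        lg_avg_2, hca_2, htos_2, atds_2):
    """Sum of two multiplicative terms, one per hypothetical offensive 'type'."""
    term1 = (lg_avg_1 + hca_1) * htos_1 / atds_1
    term2 = (lg_avg_2 + hca_2) * htos_2 / atds_2
    return term1 + term2

# With every strength at 1.0, the prediction collapses to the sum of the
# constants, exactly as in the one-dimensional case.
baseline = predicted_home_ortg(50.0, 3.67, 1.0, 1.0,
                               50.0, 3.67, 1.0, 1.0)
```

The point is that a team strong in "type 1" offence facing a defence weak against type 1 gets extra credit in that term specifically, which a single offence number can't express.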

It's completely hypothetical at the moment, of course, but I hope that by looking at teams with a high type-1 offence and contrasting them with teams with a high type-2 offence, we could find some meaning in those "types". I find this interesting because the types would come directly from the formula, not from our perceptions of basketball, which I tend not to trust as long as they aren't supported by numbers. (Too many ideas that make a lot of sense and are widely accepted, like clutch play or the "hot hand", seem not to exist...)

Was I clear? I could try to explain again.

| team_id | Offense | Defense |
|---|---|---|
| ATL | 1.021 | 1.001 |
| BOS | 1.025 | 1.059 |
| CHA | 0.981 | 0.999 |
| CHI | 0.979 | 1.036 |
| CLE | 0.955 | 0.961 |
| DAL | 1.015 | 1.051 |
| DEN | 1.021 | 1.003 |
| DET | 0.968 | 0.963 |
| GSW | 0.985 | 0.965 |
| HOU | 1.021 | 0.975 |
| IND | 0.989 | 1.036 |
| LAC | 0.979 | 0.973 |
| LAL | 1.052 | 1.003 |
| MEM | 0.989 | 0.993 |
| MIA | 1.041 | 1.051 |
| MIL | 0.944 | 1.046 |
| MIN | 0.972 | 0.969 |
| NJN | 0.970 | 0.977 |
| NOH | 0.987 | 1.053 |
| NYK | 1.030 | 0.965 |
| OKC | 1.027 | 0.978 |
| ORL | 0.999 | 1.052 |
| PHI | 0.993 | 1.001 |
| PHO | 1.054 | 0.946 |
| POR | 0.997 | 1.002 |
| SAC | 0.951 | 0.972 |
| SAS | 1.052 | 1.032 |
| TOR | 1.002 | 0.972 |
| UTA | 1.029 | 1.016 |
| WAS | 0.970 | 0.950 |
| lgavg | 53.671 | |
| hca | 53.671 | |

*(You probably shouldn't split the lg constant and HCA out, because you're just applying the multiplier to their sum -- which happens to be the lg ORtg. It doesn't matter, though; half the avg ORtg multiplied by two obviously equals the avg ORtg.)*

Average squared prediction error per game was 190.2; the BBR Rankings' average squared error was 185.1. Still, it's an interesting concept, and definitely a different approach from the SRS.
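For clarity, the comparison metric above is just the mean of the per-game squared errors. A minimal sketch, with fabricated scores for illustration:

```python
# Minimal sketch of "average squared prediction error per game", the
# metric used above to compare the model against the BBR Rankings.
# The example values are fabricated, not real game data.

def avg_squared_error(predicted, actual):
    """Mean squared difference between predicted and actual ratings."""
    return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted)

# e.g. two games, errors of -5 and +10 points:
err = avg_squared_error([100, 110], [105, 100])
```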

That's actually quite common for Phoenix: they run a high-risk, fast-paced offense that scores them a bunch of points but also concedes a lot of turnovers and scores for the other team.

Milwaukee was similar last year, in reverse.

Home OffRtg = (LgAvg + HCA)*HomeTeamOffensiveStrength/AwayTeamDefStrength

That is, will the minimization algorithm support this formula?

The reason I'm asking: if you use such a formula, you could sum a few such terms ((LgAvg1+HCA1)\*HTOS1/ATDS1 + (LgAvg2+HCA2)\*HTOS2/ATDS2), and those numbers might not stay very close to 1. That is, it seems plausible that the HTOS terms in such a formula would "represent" different types of offence, and their values would capture not only how well the team runs a specific offensive mode, but also its preferences. I thought it might be interesting to see such results, especially since the distinction between "modes" would arise automatically from the fit, without making any assumption about what they might be (except their number). It might be pointless, but I don't know how to check.
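On whether a minimization algorithm can support this: the whole model reduces to an ordinary loss function of the parameter vector, which is all a generic black-box minimizer (e.g. something like scipy.optimize.minimize) needs. A sketch of that loss, with assumed constants, made-up team names, and toy games:

```python
# Sketch showing the multiplicative formula as a plain loss function.
# LG_AVG and HCA are assumed constants; teams and games are toy examples.

LG_AVG, HCA = 103.7, 3.6  # illustrative league average and home-court edge

def predict(htos, atds):
    """Predicted home offensive rating from the single-term formula."""
    return (LG_AVG + HCA) * htos / atds

def squared_loss(params, games):
    """params: {team: (off_strength, def_strength)};
    games: list of (home, away, actual_home_ortg) tuples."""
    total = 0.0
    for home, away, actual in games:
        pred = predict(params[home][0], params[away][1])
        total += (pred - actual) ** 2
    return total / len(games)

# With all strengths at 1.0, every prediction is just LG_AVG + HCA:
params = {"AAA": (1.0, 1.0), "BBB": (1.0, 1.0)}
games = [("AAA", "BBB", 110.0), ("BBB", "AAA", 105.0)]
loss = squared_loss(params, games)
```

Summing a second term changes only the body of `predict`; the optimizer's view of the problem (parameters in, scalar loss out) stays the same.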

What do you think?

Hopefully Bynum's return will change things.

http://www.pro-football-reference.com/blog/?p=37

Basically, the ratings start with a team's efficiency differential. Then they adjust for strength of schedule by adding or subtracting based on how much above or below average their opponents' ratings were. If you're a +2 eff. diff. team facing a +2 schedule, your rating would be +4; if you're at +2 e.d. and faced a -2 schedule, your rating is 0.

The real trick is that the SOS depends on the ratings and the ratings depend on the SOS, so the calculation essentially has to run through many iterations before converging on the final set of ratings.

Anyway, to answer your question, your efficiency diff. gets credited for playing a tough schedule and debited for playing an easy schedule.
