SiouxSports.com Forum

It was important in the spring



Posted

The Massey ratings. SDSU and NDSU posters were writing about how, according to the Massey ratings, they could play with so many teams and beat most of them. This week's Massey ratings:

UND-127

UNI -130

NDSU-138

Winona-160

St. Cloud-167

SDSU-178

According to this (and they were legitimate last year), UND is the best in the area!

Posted

Oh God!! Not another Fan of the Massey!!! Even if it does make my team look very favorable, I still hate the thing. If any of you are on the D2 board and listen to Bulldog sing the praises of Massey, you would know what I'm talking about. :p

Posted

I, along with most people, do not put much stock in computer rankings of any sort. Luckily, in Division II, along with all lower divisions, we don't have to worry about these rankings deciding champions. They provide an interesting conversation piece, but things are decided on the field like they are supposed to be.

Posted

I have the same problem with the Massey ratings now as I did last Spring -- the all-division matrix of game outcomes is way too sparse interdivisionally for meaningful analysis.

Many of us Sioux fans tend to think about statistical ratings from a Bradley-Terry perspective: there are only about 20 interactions between the DII cluster of teams and the DIAA cluster, and those interactions are so overwhelmingly won by DIAA that you can't form a win-tie path from each team to every other.
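To make the win-tie path idea concrete, here's a small sketch (my own toy code with made-up teams, not anyone's real schedule) that checks whether every team can reach every other through a chain of wins -- the connectivity Bradley-Terry needs to produce finite ratings:

```python
from collections import defaultdict, deque

def connected_by_wins(results, teams):
    """Return True if every team can reach every other through a chain
    of wins (a 'win-tie path'). `results` is a list of (winner, loser)
    pairs; enter a tie once in each direction."""
    beat = defaultdict(set)
    for winner, loser in results:
        beat[winner].add(loser)

    def reachable(start):
        seen, queue = {start}, deque([start])
        while queue:
            for nxt in beat[queue.popleft()] - seen:
                seen.add(nxt)
                queue.append(nxt)
        return seen

    return all(reachable(t) == set(teams) for t in teams)

# Two clusters (think DII vs DIAA) where one side wins every crossover game:
games = [("A", "B"), ("B", "A"),   # intra-cluster results
         ("C", "D"), ("D", "C"),
         ("C", "A"), ("C", "B")]   # crossovers all won by the C/D cluster
print(connected_by_wins(games, ["A", "B", "C", "D"]))  # False: A and B never beat C or D
```

One crossover upset (say A over C) is enough to link the clusters and make the check pass.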

Massey attempts to work around that sparseness problem by not just analyzing wins/losses, but the score of each contest. UND was a team that, famously, last year played to the level of each opponent, yet they still managed to win; is a team that runs up the score really better? Massey does correct for that with what he calls a Bayesian Correction, which basically reduces the importance of score differential as more win-loss information becomes available.

Right now, when win-loss information is sparse, UND is being rewarded massively for blowing out Mesa State. As more win-tie connections are made throughout the matrix, the scores will become less important and Massey's ratings will come to resemble Bradley-Terry, though there will always be the score differential kludge linking parts of the matrix that aren't naturally linked by a win-tie path.
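As a hedged illustration of that diminishing-weight idea -- a toy formula of my own, not Massey's actual math -- consider two undefeated teams, one that blows everyone out and one that squeaks by:

```python
def blended_strength(win_pct, avg_margin, games_played, k=5.0):
    """Toy illustration (NOT Massey's formula): margin of victory
    dominates early, but its weight shrinks toward zero as win-loss
    evidence accumulates."""
    margin_weight = k / (k + games_played)                # ~1 with no games, -> 0 late
    margin_term = max(-1.0, min(1.0, avg_margin / 50.0))  # scale margins to [-1, 1]
    return margin_weight * margin_term + (1 - margin_weight) * win_pct

# Two undefeated teams: one wins by 40 a game, one by 3 a game.
for g in (1, 11):
    gap = blended_strength(1.0, 40, g) - blended_strength(1.0, 3, g)
    print(f"after {g} games, blowout team leads by {gap:.3f}")
# The gap shrinks from roughly 0.62 to roughly 0.23 as real results pile up.
```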

Posted

Statistics?!

What kind of useless garbage is that?

Why don't you go try and fit a line to a scatter plot?

Posted

Sorry, didn't mean to confuse the NDSU-educated.

I don't share the opinion that all sports statistics, including Massey's ratings, are "useless garbage". Rather, I was just warning that we should bear in mind how ratings are formulated so we know what relative ratings actually mean. The one element of Massey's ratings that I forgot to mention was that it uses last season's results as a beginning baseline, so UND's rating is also being elevated for last season's performance.

Posted


I was joking.

Posted

Here are some facts, not stats, not ratings:

NDSU DIAA

Valpo DIAA

UND D2

Mary NAIA

Montana Tech NAIA (painful for us Bison Fans, but True)

This is what most people see, then they look at the rankings, the conference standings, etc. Only nerds like me look at ratings.

Jim, I really enjoyed the background on the Massey that you did.

Posted

What I've never understood is how people say they use statistics to figure out these ratings. How can you gain any precision in these ratings when the games aren't replicated? Perhaps I know just enough about statistics to ask really stupid questions, I don't know.

Posted
Polls are for entertainment.

The only way to know is to play the game.

Thanks for the wonderful insight :p . Massey's not a poll, it's a rating system, and I'm just curious as to how it's computed with any sort of precision.

Posted

Ignoring the sparseness problem for now, the link Sicatoka provided includes a brief description of, and links to more in-depth descriptions of, the Bradley-Terry technique. It was developed independently several times to judge contests where a full round-robin isn't possible (e.g., a dog show or a food competition). As long as each entrant appears in a minimum number of pair comparisons and there are no unconnected pockets in the comparison matrix, the technique lets you infer what would have happened had two uncompared items been compared.
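For the curious, here's a minimal sketch of the classic Bradley-Terry fixed-point iteration (the Zermelo/Ford approach); the teams and results are invented purely for illustration:

```python
def bradley_terry(results, teams, iters=500):
    """Fit Bradley-Terry strengths by the classic fixed-point iteration.
    `results` is a list of (winner, loser) pairs; every team must be
    linked to every other by a chain of wins, or estimates drift to the
    boundary."""
    wins = {t: 0 for t in teams}
    pair_games = {}
    for w, l in results:
        wins[w] += 1
        key = tuple(sorted((w, l)))
        pair_games[key] = pair_games.get(key, 0) + 1

    p = {t: 1.0 for t in teams}  # initial uniform strengths
    for _ in range(iters):
        new = {}
        for t in teams:
            # Sum, over games involving t, of n_games / (p_t + p_opponent)
            denom = sum(n / (p[t] + p[a if b == t else b])
                        for (a, b), n in pair_games.items() if t in (a, b))
            new[t] = wins[t] / denom if denom else p[t]
        total = sum(new.values())
        p = {t: v / total for t, v in new.items()}  # normalize each pass
    return p

# A beat B twice and lost once; B beat C twice and lost once; A and C never met.
games = [("A", "B"), ("A", "B"), ("B", "A"),
         ("B", "C"), ("B", "C"), ("C", "B")]
p = bradley_terry(games, ["A", "B", "C"])
print(round(p["A"] / (p["A"] + p["C"]), 2))  # inferred P(A beats C) -> 0.8
```

Even though A and C never played, the chain through B lets the model infer the matchup -- exactly the "uncompared items" trick described above.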

Of course, sporting events are performance-based, so they include an element of randomness that makes the technique imperfect, but it's still a great place to start. The problem to which I think you were alluding is the sparseness issue I was describing earlier. We don't usually find the Bradley-Terry results predictive until each team has about 15 hockey games in the can (a good 25% of which are usually inter-conference, creating those crucial linkages).

To compensate for the shortness of the football season and the desire for immediate ratings, Massey adds two techniques: he starts with a base of last year's ratings (which diminishes in weight as real results come in), and he uses scores to say more about team strength than wins/losses alone can (which also diminishes as more results come in, and which keeps his ratings out of the BCS, since the NCAA considers score-based ratings sketchy).
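A toy sketch of the first technique, the decaying prior, under the assumption that last year's rating behaves like a fixed number of pseudo-games (my simplification, not Massey's published method):

```python
def rating_with_prior(prior, observed, games_played, prior_games=8.0):
    """Toy version of seeding a rating with last season's result:
    the prior is treated as worth `prior_games` of evidence, so its
    influence fades as real games accumulate. Hypothetical weights,
    not Massey's actual parameters."""
    w = prior_games / (prior_games + games_played)
    return w * prior + (1 - w) * observed

# A team rated 90 last year that has looked like a 60 so far this year:
print(round(rating_with_prior(90, 60, games_played=0), 1))   # 90.0 (all prior)
print(round(rating_with_prior(90, 60, games_played=2), 1))   # 84.0
print(round(rating_with_prior(90, 60, games_played=24), 1))  # 67.5 (mostly this year)
```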

Posted
For instance, in agronomy research I can use ANOVA to see how precise my results are (p values, LSD's, confidence intervals, standard errors, etc.)  Is there any way to "fit" your predicted outcomes with actual outcomes?

First, I'm not by any stretch a statistician -- I'm just interested in sports ratings and learned enough to implement the ratings on this site (PWR, RPI, KRACH).

However, analysis of variance strikes me as the wrong path, because I tend to think of it as useful for comparing numerous factors between independent groups (though a Bison fan could use ANOVA to compare the power of the divisions to prove that they're really different!). Because Massey's technique uses scores rather than just outcomes, the model doesn't perfectly fit past results, so it's derived via a maximum likelihood estimate, and the best indication of its error is the standard deviation (which Massey does provide). He says 68% of observed game results will fall within one deviation of the expected result.
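A quick simulation of that 68% claim, using made-up normal residuals rather than real game data:

```python
import random
import statistics

# Hedged simulation, not real results: if rating errors (actual margin
# minus predicted margin) are roughly normal, about 68% of games should
# land within one standard deviation of the prediction. The sd of ~14
# points is an invented figure for illustration.
random.seed(42)
residuals = [random.gauss(0, 14) for _ in range(2000)]
sd = statistics.pstdev(residuals)
within = sum(abs(r) <= sd for r in residuals) / len(residuals)
print(f"{within:.0%} of simulated games within one sd")  # roughly 68%
```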

Posted

Massey's football ratings look a lot better interdivisionally now (not surprising, since last weekend featured a lot more interactions between divisions, including a few DII wins over DIAA). The top teams in IAA are bubbling up above the top of II, as they undoubtedly should be.

UND still hanging strong at 120, #2 in DII.
