Sunday, January 22, 2012

Because conference play gets underway quickly in men's college volleyball (usually just after an early-season tournament or two), and because the top teams are concentrated in the Mountain Pacific Sports Federation (MPSF), fans don't have to wait long for marquee match-ups.
Just this past weekend, No. 1 BYU hosted No. 5 USC for a pair of matches (BYU and Hawai'i always play a given opponent in a two-match home series or road series, presumably to cut down on travel). Also, No. 3 UCLA hosted No. 4 Stanford.
I will focus on the BYU-USC series, as better inferences can be made from two matches than from one. BYU won both matches, but each was highly competitive. The Cougars took Friday night's first match in four games (sets), and Saturday night's rematch in five (15-13 in the fifth, in fact).
As is customary, I stress hitting percentages and the teams' allocation of spike attempts. The following graphic (on which you can click to enlarge) presents this information for the Cougars' and Trojans' main hitters.
As can be seen, the stalwart for BYU was Taylor Sander, a 6-foot-4 sophomore outside (left-side) hitter who hit .419 and .383 in the two matches, taking an average of 45 swings per night. Robb Stowell, a 6-7 senior opposite (right-side) hitter, hit .375 the second night on 40 attempts, after hitting only .200 the first. Russ Lavaja (6-7, junior) contributed .364 and .455 offensive outings, although as is common for a middle blocker he had relatively few spike attempts.
I watched the latter parts of Friday night's match on BYU TV and I can tell you that Sander and Stowell were just pummeling the ball.
'SC was led by two pillars of last year's team: Tony Ciarelli (6-6, senior) and Steven Shandrick (6-7, senior). Ciarelli was very steady, hitting .351 and .364 in the two matches; on Saturday, he took a Herculean 55 spike attempts. Shandrick hit .500 and .375.
The Trojans also feature a number of frosh players, led by setter Micah Christenson. Fellow newcomer MB Robert Feathers led USC with 8 block assists on Friday, but didn't do much (statistically at least) on Saturday. MB Ben Lam, who played only in Saturday's match, recorded an error-free 7 kills on 8 attempts, for an .875 hitting percentage.
By Game 5 of Saturday's match, the two teams' offenses were firing on all cylinders, with the Cougars outhitting the Trojans, .545 (13 kills and only 1 error, on 22 swings) to .474 (10-1-19).
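For readers unfamiliar with the statistic, hitting percentage is (kills minus errors) divided by total attempts, and triplets such as "10-1-19" follow kills-errors-attempts order. A minimal Python sketch (the function name is mine), using the Game 5 figures:

def hitting_pct(kills, errors, attempts):
    # NCAA attack percentage: (kills - errors) / total attempts
    return (kills - errors) / attempts

print(round(hitting_pct(13, 1, 22), 3))  # 0.545 -- BYU in Game 5
print(round(hitting_pct(10, 1, 19), 3))  # 0.474 -- USC in Game 5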
BYU outblocked USC, 15-11 in total team blocks Friday and 13-9.5 Saturday. (There really is no such thing as a half-block in the aggregate; what happens is that on a triple-block, each player is credited with a half-block instead of a one-third block, resulting in the "phantom" half-block in the totals.)
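A small sketch of that bookkeeping (the function name is mine):

def team_blocks(solos, assists):
    # NCAA team-block total: each block solo counts 1, each assist 0.5
    return solos + 0.5 * assists

# One triple block: three players each receive a block assist, so the
# play adds 1.5 -- not 1.0 -- to the team total (the "phantom" half).
print(team_blocks(solos=0, assists=3))  # 1.5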
Keeping with the theme of hot-hitting, UCLA registered a .376 percentage in sweeping Stanford. Leading the Bruins (among players with 10 or more swings) were MB Thomas Amberg (.600), OH Gonzalo Quiroga (.538), and MB Weston Dunlap (.333). The Cardinal's Brad Lawson, star of the 2010 NCAA championship match, hit .333, but Stanford as a whole hit only .179. UCLA also enjoyed a large blocking advantage, 8.5-2.
A couple of days ago, the Los Angeles Times had a feature article on UCLA coach Al Scates, who is retiring at the end of this, his 50th, season at the Bruin helm.
Thursday, January 12, 2012
Hot Hand in Volleyball?
Science News has just published an article on research by German and Austrian investigators purporting to document a hot hand in volleyball spiking, and the reporter was nice enough to contact me for comment. (I operate another blog, on the statistical study of sports streakiness, and even have a book out on the subject.)
A hot hand in this context would mean that a player who has put away several kills in a row has a higher likelihood of a kill on his or her next spike than the player's long-term kill percentage would suggest. A cold hand is the opposite: a player whose last few spike attempts have resulted in errors (e.g., a ball hit out of bounds) has higher-than-usual odds of an error on the next attempt.
Within the constraints of the data set to which the authors had access (partial game-sequence data from top players in a German men's professional league), the analyses were conducted with full rigor and in a manner consistent with previous hot hand research. However, as I elaborate below, I feel there was at least one major limitation in the available data.
One type of analysis done by the authors used the runs test. This statistical technique requires the researcher first to list the sequence of events, in this case, a given player's order of kills (K) and errors (E). A "run" is an uninterrupted sequence of the same outcome, either all K's or all E's. The following hypothetical sequence, with few runs, would indicate streaky performance (i.e., clustering of K's and of E's):
KKKKEEEKKKKK (3 runs)
Another hypothetical sequence (with the same number of total attempts), this time with many runs, would indicate less (or absent) streakiness:
KKEKEEKKKKEK (7 runs)
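For concreteness, here is a minimal Python sketch of the runs test (the large-sample Wald-Wolfowitz version; the function name is mine), applied to the two hypothetical sequences:

from math import sqrt

def runs_test(seq):
    # Wald-Wolfowitz runs test on a string of 'K'/'E' outcomes;
    # returns (observed runs, expected runs, z-score).
    n1, n2 = seq.count('K'), seq.count('E')
    n = n1 + n2
    runs = 1 + sum(a != b for a, b in zip(seq, seq[1:]))  # a new run starts at each switch
    mu = 2 * n1 * n2 / n + 1  # expected number of runs under chance
    var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))
    return runs, mu, (runs - mu) / sqrt(var)

print(runs_test("KKKKEEEKKKKK"))  # 3 runs vs. 5.5 expected, z of about -2.1
print(runs_test("KKEKEEKKKKEK"))  # 7 runs vs. about 6.3 expected, z of about 0.5

A substantially negative z (far fewer runs than chance predicts) is the statistical signature of streakiness.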
According to the Science News piece:
An analysis of playoff data from the 1999/2000 season for 26 top scorers in Germany’s first-division volleyball league identified 12 players as having had scoring runs that could not be chalked up to chance. Hot-handed players’ shots contained fewer sequences of consecutive scores than expected by chance, the result of a small number of especially long scoring runs.
As we know, however, there is a third category of outcome for spike attempts: the ball is dug up (or otherwise kept in play) by the defense and the rally continues. As I told the reporter, I definitely think those attempts should have been included in the analyses, but they apparently were unavailable in the data set the authors received. Hitting errors were very rare in the data, so balls kept in play may have been a better measure of unsuccessful spike attempts than errors were.
(Cross-posted with Hot Hand in Sports.)
Sunday, January 1, 2012
Comparing Forecasting Models for 2011 Women's NCAA Tourney
Happy New Year! I wanted to close out discussion of the 2011 NCAA women's volleyball tournament by examining the effectiveness of my newly developed Conference-Adjusted Combined Offensive/Defensive (CACOD) ranking system at predicting the outcome of tournament matches.
Details of the formula and the full set of CACOD rankings are available here. In short, however, for each team in the NCAA field, the CACOD took the "ratio of its own overall [regular] season hitting percentage (offense) divided by the overall hitting percentage it has allowed the opposition (defense)." This ratio was then multiplied by an adjustment factor based on a team's conference (the stronger the conference, the more the adjustment factor raised the team's ranking).
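In code form, the core computation is just that ratio times a multiplier. A minimal sketch, with the hitting percentages and conference factor invented purely for illustration:

def cacod(own_hit_pct, allowed_hit_pct, conference_factor):
    # (season hitting pct) / (hitting pct allowed), scaled by conference strength
    return (own_hit_pct / allowed_hit_pct) * conference_factor

# Hypothetical team: hits .280, holds opponents to .170, and plays in a
# strong conference with an assumed adjustment factor of 1.10.
print(round(cacod(0.280, 0.170, 1.10), 3))  # 1.812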
For each of the 63 matches in the tournament, I simply looked at whether the team with the higher CACOD rating won or lost. The CACOD's record is shown below, along with those from other leading rating systems (shown in a screen capture from a VolleyTalk discussion thread). You may click on the graphics below to enlarge them.
The CACOD successfully predicted the winner of 45 tournament matches, which means it generally kept pace with the more established ranking systems. (The reason some of the above records include only 62 matches is that the captured image was from before the final match.* I suspect that, in cases where other systems' records don't add up to 62, some matches featured teams that were tied in the rankings.) What's unique about the CACOD is that teams' win-loss records during the regular season play no role in formulating the rankings, just offensive and defensive hitting-percentage statistics.
From the Sweet Sixteen onward, the CACOD seemed to outperform the other systems; it fared less well in the 48 matches of the first two rounds (32 in the first round, 16 in the second). The results of the two initial rounds are shown in the next graphic.
Indeed, the CACOD trailed the top-performing system (Pablo) by five matches after the first two rounds. However, the CACOD went 10-5 the rest of the way to catch up. The results of the last 15 matches are listed below (and tallied in the short code sketch after the list), with teams' CACOD rankings at the close of the regular season shown in parentheses. Successful predictions appear in black, unsuccessful ones in red.
Sweet Sixteen
Texas (7) d. Kentucky (39)
UCLA (11) d. Penn State (12)
Florida St. (28) d. Purdue (2)
Iowa St. (9) d. Minnesota (31)
Illinois (14) d. Ohio State (25)
Florida (10) d. Michigan (33)
USC (5) d. Hawai'i (6)
Pepperdine (29) d. Kansas St. (40)
Elite Eight
UCLA (11) d. Texas (7)
Florida St. (28) d. Iowa St. (9)
Illinois (14) d. Florida (10)
USC (5) d. Pepperdine (29)
Final Four
UCLA (11) d. Florida St. (28)
Illinois (14) d. USC (5)
Championship
UCLA (11) d. Illinois (14)
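As a quick check on that 10-5 figure, the record can be tallied directly from the rankings in parentheses (lower rank = higher rated, so a prediction is successful whenever the winner's rank is the smaller number):

results = [  # (winner's rank, loser's rank) for each match above
    (7, 39), (11, 12), (28, 2), (9, 31),   # Sweet Sixteen
    (14, 25), (10, 33), (5, 6), (29, 40),
    (11, 7), (28, 9), (14, 10), (5, 29),   # Elite Eight
    (11, 28), (14, 5),                     # Final Four
    (11, 14),                              # Championship
]
correct = sum(winner < loser for winner, loser in results)
print(f"{correct}-{len(results) - correct}")  # 10-5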
For next year, I may tweak the formula a little to, for example, place greater weight on hitting-percentage statistics from later in the season than from earlier. Seeing how the CACOD did in the end, however, any revisions will likely be more minor than I expected them to be after the first two rounds!
---
*I overlooked this point in my original posting, but have now added it.