The men's college volleyball blog Off the Block has started an award for the top (what else?) blocker of the year. I was invited to be a voter and agreed to do so. Naturally, I decided to apply statistical techniques to help me determine my votes (for first, second, and third place). The simplest thing to do would be to examine the official NCAA men's statistics on blocks per game (set) and just take the leaders. However, several factors make the unfiltered, mechanical use of this statistic unacceptable to me, thus requiring procedural adjustments to make the blocking statistics more meaningful.
REFINING THE ANALYSIS
First and foremost of the variables to consider is the varying quality of competition. The Mountain Pacific Sports Federation is the dominant conference, with teams from the MPSF (or its West Coast forerunners) winning 38 of the 41 NCAA men's volleyball titles that have been contested to date (further, 32 of the 41 finals have been all-MPSF/forerunner affairs). The cream of the Midwest (MIVA) and East (EIVA) conferences, such as Ohio State and Penn State respectively, might be comparable to mid-to-upper MPSF teams, depending on the year, but many MIVA and EIVA teams would not be on a par with the MPSF. Some schools that field men's volleyball teams are so small that I had not even heard of them.
Even within the MPSF, however, there is considerable variation in quality. Though teams' fortunes shift year-to-year to some extent, squads such as USC, Stanford, BYU, UCLA, and UC Irvine have generally been a lot tougher than UC San Diego and University of the Pacific. I will get into the details later, but my key point is that quality of competition is something that needs to be taken into account. A strong blocking night against USC or another top MPSF school should receive more credit than one against a bottom-dwelling MIVA or EIVA team.
Second, length of matches should be considered. Using blocks per game prevents players from racking up the best totals just because their teams have played more five-game and fewer three-game matches than have other teams. However, the per-game basis is not perfect either, as there are more opportunities to record blocks (and other statistical accomplishments) in, say, a 25-23 game (48 points played) than in a 25-10 game (35 points). Opponent quality is thus a mixed bag: it's presumably easier to register accomplishments against a weaker (rather than stronger) opponent, but the match may not last as long!
Third, block statistics (either per game or total) only tell (defensive) success stories. Statistics are also compiled on blocking errors (i.e., touching the net or committing other technical violations while attempting to block an opponent's spike). Calculation of hitting percentage involves subtracting hitting errors from successful kills (before dividing by attempts), so why not subtract blocking errors from successful blocks?
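To make the parallel concrete (the first formula is the standard NCAA calculation; the second is my proposed blocking analogue):

Hitting Percentage = (Kills - Hitting Errors) / Total Attack Attempts

Net Blocks = Total Blocks - Blocking Errors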
A final factor I considered was home vs. away location, but it ended up showing no correlation with blocking success in my analyses.
STEPS TAKEN
To account for quality of competition, I did two things. First, I restricted my list of contenders for the blocking award to MPSF players. When I began work on my analyses, the top five teams nationally were all from the MPSF, so it seemed clear that all (or most) of the top blockers leading their respective teams into NCAA title contention would be included. Second, each candidate player's game-by-game blocking statistics were evaluated in the context of each opposing team's season-long hitting percentage (as of March 27, the most recent statistics available when I began work on this analysis; based only on conference matches). Thus, the fact that a given player recorded X number of blocks against USC (hitting percentage .368) rather than against Cal State Northridge (.225) would be duly noted.
Also, for each match played by a Blocker of the Year contender, I recorded the total number of points in the match (e.g., if a match went 25-20, 25-21, 25-19, there would have been 135 points played). One of the key variables I derived for each player in each of his matches was successful block rate, calculated as:
(Blocks - Block Errors) / Total Points in Match
(Consistent with NCAA stat-keeping, I simply added a given player's solo blocks and block assists to obtain his total blocks. Solo blocks were pretty rare, in any case.)
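As a concrete sketch of the calculation, here is some illustrative Python (the function and variable names are my own; the sample numbers come from the Tavana match at Pacific discussed in the Results section, with an assumed split of 1 solo and 9 assists):

def successful_block_rate(solo_blocks, block_assists, block_errors, total_points):
    # Consistent with NCAA stat-keeping, total blocks are simply
    # solo blocks plus block assists (solos are rare in any case).
    total_blocks = solo_blocks + block_assists
    # Each match's rate is later paired with the opposing team's
    # season-long hitting percentage.
    return (total_blocks - block_errors) / total_points

# Example: 10 total blocks (assumed here to be 1 solo + 9 assists) and
# 1 blocking error in a match with 127 total points played:
# (10 - 1) / 127, or about .07
rate = successful_block_rate(solo_blocks=1, block_assists=9,
                             block_errors=1, total_points=127)
print(round(rate, 3))  # 0.071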
Who were the contending players in my analyses? To keep the scope of the analysis manageable, I ended up selecting players who (a) played for a national top five team, and (b) were in the national top 50 in blocks per game. The players who fulfilled these criteria (listed alphabetically) were:
Antwain Aguillard, Long Beach State
Gus Ellis, Stanford
Ryan Meehan, Long Beach State
Eric Mochalski, Stanford
Steven Shandrick, USC
Otavio Souza, BYU
Futi Tavana, BYU
Austin Zahn, USC
RESULTS
One thing I did was create, for each player, a plot of successful block rate by opponent offensive quality (hitting percentage), with a data point for each of his MPSF conference matches. Because of the voting deadline, I could not include the final weekend of play, so players will tend to have fewer than the 22 possible data points. Let's look at a couple of examples (you can click on the graphics to enlarge them).
In the upper-left of the graph for BYU's Futi Tavana, a leading contender, is a data point for his team's second match at University of the Pacific (due to travel considerations, BYU plays each MPSF opponent either twice in Provo or twice on the road, on back-to-back nights, whereas most other teams alternate home and away with each opponent; Hawai'i does the same as BYU). Tavana had a net +9 blocks (10 blocks - 1 error), which, when divided by the 127 total points played, yields .07 on the vertical axis; on the horizontal axis is Pacific's team hitting percentage of .233. Selected other matches are similarly identified in the graph.
The same kind of graph is shown for USC's Steven Shandrick, another top contender. As labeled on the graph, Shandrick came up big in matches against the Trojans' best-hitting opponents.
In the next figures, I no longer show individual data points, but rather just the trend lines for the eight contenders. Nearly all the lines slope downward, consistent with the expectation that blocking performance would decline as one faced better-hitting teams. One apparent exception is Shandrick, whose line slopes upward. As shown above, however, Shandrick had a particularly poor blocking match against weak-hitting Cal State Northridge. Without that match, the upward trend would be diminished or eliminated.
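For anyone wanting to reproduce the trend lines, a minimal sketch follows (illustrative Python; the data pairs below are placeholders standing in for a player's actual per-match values, not real results):

import numpy as np

# Hypothetical (opponent hitting pct, successful block rate) pairs for one
# player -- placeholders, not actual match data.
opp_hit_pct = np.array([0.225, 0.233, 0.280, 0.310, 0.368])
block_rate = np.array([0.080, 0.071, 0.055, 0.048, 0.030])

# Ordinary least-squares trend line:
# block_rate ~ slope * opp_hit_pct + intercept
slope, intercept = np.polyfit(opp_hit_pct, block_rate, deg=1)
print(f"slope = {slope:.3f}, intercept = {intercept:.3f}")

# A negative slope matches the expectation that blocking production
# declines against better-hitting opponents. The line can also be
# extrapolated to a hypothetical .400-hitting team, as in the
# Conclusion below.
print(f"predicted rate vs .400 hitting: {np.polyval([slope, intercept], 0.400):.3f}")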
CONCLUSION AND MY VOTES
Under my criteria, Tavana and Shandrick were the top blockers against the best-hitting opponents. According to their trend lines, each would block at around a .025 level against a hypothetical .400-hitting team (USC's .368 was the conference's best team hitting percentage as of when I recorded the data). At all other levels of opponent hitting percentage, Tavana outblocked Shandrick. On this basis, I award my first-place vote to Tavana and my second-place vote to Shandrick. The battle for third place was a close call between a few different players, but ultimately, I thought the results pointed to USC's Austin Zahn.
If, as I suggested earlier, one had simply looked at the official NCAA statistics, the case for Tavana as the top blocker also would have been strong, with a lot less work involved! At 1.50 blocks per game, Tavana was second (at this writing) to Shaun Sibley of George Mason University (1.55), but the latter faced less challenging opposition, playing in the EIVA.
However, Shandrick (tied for 25th) and Zahn (19th) would not have immediately stood out in the national rankings as worthy of top-three votes on my ballot. For that reason, I think my more elaborate analysis was warranted.