Monday, April 25, 2011

2011 "Off the Block" Men's Blocking Award Announced

The winner of the 2011 men's collegiate Blocker of the Year award, bestowed by the blog Off the Block, has been announced: Futi Tavana of BYU. I was a member of the voting panel of coaches and writers; each of us voted for our top three candidates, with the winner determined on a point system. In my previous message (directly below the present one), I explained both the statistical process I followed in arriving at my votes and the particular players for whom I voted.

As shown in the following table, my vote for first place matched dead-on with the larger panel, my vote for third was in the same ballpark (or in this case, gym) as the panel, and my vote for second was nowhere close to the panel!

Player                    My Vote    Official Results
Futi Tavana (BYU)         1st        1st
Steven Shandrick (USC)    2nd        Tie 10th
Austin Zahn (USC)         3rd        Tie 4th

Why did Shandrick receive so little support from the other panelists? He amassed a healthy total of blocks, with some of his highest individual-match numbers coming against USC's most proficient-hitting opponents. Is there some factor I overlooked that made Shandrick's totals artificially high? I'd be very interested to read people's views on the issue in the comments.

Thursday, April 14, 2011

My Ballot for Men's College Blocker of the Year

The men's college volleyball blog Off the Block has started an award for the top (what else?) blocker of the year. I was invited to be a voter and agreed to do so. Naturally, I decided to apply statistical techniques to help me determine my votes (for first, second, and third place). The simplest thing to do would be to examine the official NCAA men's statistics on blocks per game (set) and just take the leaders. However, several factors make the unfiltered, mechanical use of this statistic unacceptable to me, thus requiring procedural adjustments to make the blocking statistics more meaningful.

REFINING THE ANALYSIS

First and foremost among the variables to consider is the varying quality of competition. The Mountain-Pacific Sports Federation is the dominant conference: teams from the MPSF (or its West Coast forerunners) have won 38 of the 41 NCAA men's volleyball titles contested to date, and 32 of those 41 finals have been all-MPSF/forerunner affairs. The cream of the Midwest (MIVA) and East (EIVA) conferences, such as Ohio State and Penn State respectively, might be comparable to mid-to-upper MPSF teams, depending on the year, but many MIVA and EIVA teams would not be on a par with the MPSF. Some schools that field men's volleyball teams are so small that I have not heard of them.

Even within the MPSF, however, there is considerable variation in quality. Though teams' fortunes shift year-to-year to some extent, squads such as USC, Stanford, BYU, UCLA, and UC Irvine have generally been a lot tougher than UC San Diego and University of the Pacific. I will get into the details later, but my key point is that quality of competition is something that needs to be taken into account. A strong blocking night against USC or another top MPSF school should receive more credit than one against a bottom-dwelling MIVA or EIVA team.

Second, length of matches should be considered. Using blocks per game prevents players from racking up the best totals simply because their teams have played more five-game and fewer three-game matches than other teams have. However, the per-game basis is not perfect either, as there are more opportunities to record blocks (and other statistical accomplishments) in, say, a 25-23 game than in a 25-10 game. Opponent quality thus cuts both ways: it is presumably easier to register accomplishments against a weaker (rather than stronger) opponent, but the match may not last as long!

Third, block statistics (either per game or total) only tell (defensive) success stories. Statistics are also compiled on blocking errors (i.e., touching the net or committing other technical violations while attempting to block an opponent's spike). Calculation of hitting percentage involves subtracting hitting errors from successful kills (before dividing by attempts), so why not subtract blocking errors from successful blocks?

A final factor I considered was home-away location, but it showed no correlation with blocking success in my analyses.

STEPS TAKEN

To account for quality of competition, I did two things. First, I restricted my list of contenders for the blocking award to MPSF players. When I began work on my analyses, the top five teams nationally were all from the MPSF, so it seemed clear that all (or most) of the top blockers leading their respective teams into NCAA title contention would be included. Second, each candidate player's game-by-game blocking statistics were evaluated in the context of each opposing team's season-long hitting percentage (as of March 27, the most recent statistics available when I began work on this analysis; based only on conference matches). Thus, the fact that a given player recorded X number of blocks against USC (hitting percentage .368) rather than against Cal State Northridge (.225) would be duly noted.

Also for each match played by a Blocker of the Year contender, I recorded the total number of points in the match (e.g., if a match went 25-20, 25-21, 25-19, there would have been 135 points played). One of the key variables I derived for each player in each of his matches was successful block rate, calculated as:

(Blocks - Block Errors) / Total Points in Match

(Consistent with NCAA stat-keeping, I simply added a given player's solo blocks and block assists to obtain his total blocks. Solo blocks were pretty rare, in any case.)
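As a concrete sketch, the calculation above can be expressed in a few lines of Python. The function name is mine, and the solo/assist split in the example is assumed for illustration; only the match totals are given in the text:

```python
def successful_block_rate(solo_blocks, block_assists, block_errors, total_points):
    """Net blocking production per point played in one match.

    Total blocks = solo blocks + block assists (per NCAA stat-keeping);
    block errors are subtracted, by analogy to hitting percentage.
    """
    net_blocks = (solo_blocks + block_assists) - block_errors
    return net_blocks / total_points

# Example: 10 total blocks, 1 error, 127 points played.
# (The 1-solo/9-assist split of the 10 blocks is assumed here.)
rate = successful_block_rate(1, 9, 1, 127)
print(round(rate, 2))  # 0.07
```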

Who were the contending players in my analyses? To keep the scope of the analysis manageable, I ended up selecting players who (a) played for a national top five team, and (b) were in the national top 50 in blocks per game. The players who fulfilled these criteria (listed alphabetically) were:

Antwain Aguillard, Long Beach State
Gus Ellis, Stanford
Ryan Meehan, Long Beach State
Eric Mochalski, Stanford
Steven Shandrick, USC
Otavio Souza, BYU
Futi Tavana, BYU
Austin Zahn, USC
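The two-part selection rule above amounts to a simple filter. The sketch below illustrates the idea; the team set and blocker list are abbreviated stand-ins, not the actual NCAA data:

```python
# Hypothetical, abbreviated stand-ins for the actual NCAA lists.
top_five_teams = {"BYU", "USC", "Stanford", "Long Beach State", "UC Irvine"}

# (player, team) pairs drawn from the national top 50 in blocks per game
top_50_blockers = [
    ("Futi Tavana", "BYU"),
    ("Steven Shandrick", "USC"),
    ("Shaun Sibley", "George Mason"),  # top-50 blocker, but not on a top-five team
]

# Keep only players who satisfy both criteria (a) and (b)
contenders = [player for player, team in top_50_blockers if team in top_five_teams]
print(contenders)  # ['Futi Tavana', 'Steven Shandrick']
```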

RESULTS

One thing I did was create, for each player, a plot of successful block rate by opponent offensive quality (hitting percentage), with a data point for each of his MPSF conference matches. Because of the voting deadline, I could not include the final weekend of play, so players will tend to have fewer than the 22 possible data points. Let's look at a couple of examples (you can click on the graphics to enlarge them).

In the upper-left for BYU's Futi Tavana, a leading contender, is a data point for his team's second match at University of the Pacific (due to travel considerations, BYU plays each MPSF opponent either twice in Provo or twice on the road, on back-to-back nights, whereas most other teams alternate home-and-away with each opponent; Hawai'i does the same as BYU). Tavana had a net +9 blocks (10 blocks - 1 error), which, when divided by the 127 total points played, yields .07 on the vertical axis; on the horizontal axis is Pacific's team hitting percentage of .233. Selected other matches are similarly identified in the graph.

The same kind of graph is shown for USC's Steven Shandrick, another top contender. As labeled on the graph, Shandrick came up big in matches against the Trojans' best-hitting opponents.

In the next figures, I no longer show individual data points, instead just the trend lines for the eight contenders. Nearly all the lines slope downward, consistent with the expectation that blocking performance would decline as one faced better-hitting teams. One apparent exception is Shandrick, whose line slopes upward. As shown above, however, Shandrick had a particularly poor blocking match against weak-hitting Cal State Northridge. Without that match, the upward trend would be diminished or eliminated.


CONCLUSION AND MY VOTES

Under my criteria, Tavana and Shandrick were the top blockers against the best-hitting opponents. According to their trendlines, each would block at around a .025 level against a hypothetical .400-hitting team (USC's .368 was the conference's best team hitting percentage, as of when I observed the data). At all other levels of opponent hitting percentage, Tavana outblocked Shandrick. On this basis, I awarded my first-place vote to Tavana and my second-place vote to Shandrick. The battle for third place was a close call among a few different players, but ultimately, I thought the results pointed to USC's Austin Zahn.

If, as I suggested earlier, one had simply looked at the official NCAA statistics, the case for Tavana as the top blocker also would have been strong, with a lot less work involved! At 1.50 blocks per game, Tavana was second (at this writing) to Shaun Sibley of George Mason University (1.55), but the latter would have faced less challenging opposition, playing in the EIVA.

However, Shandrick (tied for 25th) and Zahn (19th) would not have immediately stood out in the national rankings as worthy of top-three votes on my ballot. For that reason, I think my more elaborate analysis was warranted.

Saturday, April 2, 2011

USC Men Get Second Win Over Stanford

Playing at home, the Stanford men made things much more competitive against No. 1 USC last night than in the teams' first meeting back in February, but the Trojans still prevailed, 25-22, 21-25, 25-22, 25-22. As noted in the linked article, "The Cardinal outhit USC (.305 to .290) and had more digs (45 to 39), but the Trojans posted 14 blocks (to Stanford's 6.5) and served 5 aces."

Among the teams' big hitters I discussed in yesterday's preview (below), the offensive star was USC's Tony Ciarelli, with a .500 attack percentage (21 K-4 E-34 TA). Steven Shandrick, a 6-7 middle blocker amid an attack oriented toward the outsides, added a .474 performance for the Trojans (10-1-19). Stanford managed to contain two of USC's other weapons, Murphy Troy (.059) and Tri Bourne (.222), but fell victim to the Trojans' depth and balance (box score).

For Stanford, Brad Lawson (.297, 17-6-37) and Spencer McLachlin (.267, 14-6-30) made solid contributions, with Eric Mochalski (.571, 9-1-14) coming up big. Brian Cook did not play.

Another thing I noticed in the box score was the high rate of siding out (i.e., winning the rally on the opponent's serve) by both teams, which I have plotted as follows (you may click on the graph to enlarge it):

As I noted three years ago in an analysis focused on side-out rates, "Of necessity, the team that achieves a higher side-out rate in a game will win the game." Stanford was steady, for the most part, at a side-out rate around the mid-.60s. 'SC exceeded .70 in two of its game wins, an incredibly high rate. As seen in Game 2, the Cardinal prevailed only when it held the Trojans to a .58 side-out rate, a tall order for any team.
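For readers who want to compute side-out rates themselves, here is a sketch of the calculation from a rally-by-rally log. The log and team labels are invented; the only rule assumed is rally scoring, under which the rally winner both scores the point and serves next:

```python
def side_out_rates(rallies):
    """Side-out rate for each team: the fraction of the opponent's serves
    on which that team won the rally.

    `rallies` is a list of (serving_team, winning_team) pairs.
    """
    serves_faced = {}  # serves received, keyed by receiving team
    side_outs = {}
    for server, winner in rallies:
        receiver = "B" if server == "A" else "A"
        serves_faced[receiver] = serves_faced.get(receiver, 0) + 1
        if winner == receiver:
            side_outs[receiver] = side_outs.get(receiver, 0) + 1
    return {t: side_outs.get(t, 0) / serves_faced[t] for t in serves_faced}

# Toy rally log (server, winner); the winner of each rally serves the next one.
log = [("A", "A"), ("A", "B"), ("B", "B"), ("B", "A"), ("A", "A"), ("A", "A")]
print(side_out_rates(log))  # A sided out on 1 of 2 serves faced, B on 1 of 4
```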