Estimating Catcher Defensive Value: How Good Are Navarro and Shoppach?

Trying to evaluate catcher defense is really, really, really tough.  It used to be that evaluating defense as a whole was really tough to do, but advances like the Dewan +/- system and Ultimate Zone Rating (UZR) have changed that.  Nowadays, the only defensive frontier left to be tackled is catching, but it's quite the doozy.  Tom Tango, Bill James, Matt Klaassen, David Gassko, and countless others have done some great work on this subject, but at the moment we're still not able to statistically assess a catcher's defensive contributions as specifically as we can at other positions.  The big problem is simply that there are so many variables to control for when looking at catchers.  How do you separate what's the pitcher and what's the catcher?  How do you quantify "framing"?  What about the umpire?  How big of an impact do passed balls have?  What about caught stealings?  It's...well, a lot.

Over this weekend, I had a bit of a brainstorm and figured I'd try to tackle catcher defense from a slightly different angle.  I'm no statistician, so I can't do all the crazy number tricks of the likes of Tango, Lichtman, and James, but I think I figured out a simple way to estimate a catcher's defensive value.  It's crude and there are plenty of holes in it, but...well, I had fun with it.  If you want to read the long version, feel free to check out my FanPosts at Beyond the Boxscore, but I'll summarize things below for all those who want the simplified version.

Let me start with a disclaimer: before I started this analysis, I'd done essentially no research whatsoever on all the current advances in quantifying catcher defense.  Take it for what it is: a fun little experiment.

All right, so, big picture time.  Imagine if you could graph the exact defensive ability of every major league baseball player over every season for the last 50 years.  What would that graph look like?  Well, it'd probably look something like a normal distribution (AKA: normal curve, bell curve, Gaussian distribution), right?  The majority of seasons would fall close to the average - within one or two standard deviations from the mean - with a smaller number of outlier seasons on both sides.  Determining what the average would be is easy enough (0 UZR, or a neutral defensive season) and if we wanted, the furthest limits on both ends could be easily established by looking back through historical UZR data.
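To make that picture a little more concrete, here's a toy simulation (not real UZR data; the 8.4-run standard deviation is borrowed from the positional numbers coming up below) showing how tightly seasons cluster around average under a normal distribution:

```python
# Toy simulation: if defensive value is normally distributed around 0 UZR,
# roughly 68% of player-seasons land within one standard deviation of average
# and roughly 95% within two. The 8.4-run SD is an assumed value.
import random

random.seed(42)
SD = 8.4
seasons = [random.gauss(0, SD) for _ in range(10_000)]  # simulated player-seasons

within_1 = sum(abs(s) <= SD for s in seasons) / len(seasons)
within_2 = sum(abs(s) <= 2 * SD for s in seasons) / len(seasons)
print(f"within 1 SD: {within_1:.0%}, within 2 SD: {within_2:.0%}")  # ~68% and ~95%
```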

Within that normal distribution, though, we'd have seasons from first basemen, second basemen, shortstops, catchers, etc.  We all know about Bill James' defensive spectrum, where he ranks the positions in order of increasing defensive difficulty (for a refresher: 1B, LF, RF, 3B, CF, 2B, SS, C).  With this in mind, what would the graphs look like if we broke defensive ability down by position?  Would these subsets of the normal defensive distribution look like normal distributions themselves, or would they vary slightly from position to position?

Going into the research, I hypothesized that the positions would all have slightly different distributions, with positions higher on the defensive spectrum having a higher average UZR score since defense is highly valued at those positions.  My research didn't bear that out, though; using UZR data for all regular defensive players (minimum 500 innings played) from 2002-2009, I found that the distribution of fielding ability is actually pretty uniform across the positions.  All positions had means around zero (average across all positions = 0.36 UZR), and the standard deviations between positions were also fairly similar (average standard deviation = 8.4 UZR).
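If you want to run this check yourself, the sketch below shows one way to tally the positional means and spreads.  It assumes you've exported the 2002-2009 UZR data (500-inning minimum) to a CSV with "position" and "uzr" columns; the file name and column names are my own placeholders, not any site's actual export format.

```python
# Sketch of the positional comparison, assuming a hypothetical CSV export of
# 2002-2009 UZR data (min. 500 innings) with "position" and "uzr" columns.
import csv
import statistics
from collections import defaultdict

uzr_by_pos = defaultdict(list)
with open("uzr_2002_2009.csv", newline="") as f:
    for row in csv.DictReader(f):
        uzr_by_pos[row["position"]].append(float(row["uzr"]))

for pos, scores in sorted(uzr_by_pos.items()):
    print(f"{pos}: mean {statistics.mean(scores):+.2f} UZR, "
          f"SD {statistics.stdev(scores):.1f} ({len(scores)} player-seasons)")
```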

Here's where my research takes a couple leaps of faith.  After concluding that all positions have similar defensive means and standard deviations, I decided that it was only logical to then assume that catching would follow the same rules.  Or at least, I figured that without any data to prove one way or the other that catching is inherently different than other positions*, the best estimate we can make is by using the information we already have from the other positions.  Also, playing the outfield requires much different skills than does playing the infield - different reaction time, different arm requirements, different footwork, different positioning, etc. - and all of those positions have comparable means and spreads in UZR scores.  So I am making an assumption here, but I think it's one that's not too far-fetched at least. 

* Well, I've now done some research on defensive catching data, and it seems that researchers do believe the spread of defensive scores is different from that of the other positions.  Most seem to think it's about half the spread of the other positions, so about 20 runs separating the worst and best catchers in the league.  For now, though, let's continue with my assumption and I'll come back and address this point at the end.

Anyway, the rest of the work is really simple from this point on.  Now that we have a UZR distribution with a mean and standard deviation decided upon for catchers (mean = 0.34; standard deviation = 8.4), all we need is a ranking of catchers to plug into the model.  Tango's Fan Scouting Report does the job perfectly.  Below you'll see the 40 catchers from last season with more than 500 innings behind the plate, as ranked by the Fan Scouting Report (the "Value" column).  The colors correspond to how many standard deviations each catcher falls from the Fan Scouting Report mean, with green signifying above the mean and red below the mean.  I then translated those values into UZR scores.  In other words, if a player is one standard deviation above the Fan Scouting Report mean, I gave him a value of 8.4 UZR - one standard deviation above the UZR mean.  Anyway, here are the results:
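If you'd rather see the mechanics than the table, the translation boils down to a few lines: standardize each Fan Scouting Report value, then place that same z-score on the assumed catcher UZR distribution.  The names and "Value" scores in this sketch are made-up placeholders, not the actual 2009 report.

```python
# Translate Fan Scouting Report values into estimated UZR by mapping each
# catcher's z-score onto the assumed catcher UZR distribution.
# The names and "Value" scores below are placeholders, not the real report.
import statistics

UZR_MEAN = 0.34  # assumed mean UZR for catchers
UZR_SD = 8.4     # assumed standard deviation of UZR for catchers

fan_values = {
    "Catcher A": 62,
    "Catcher B": 55,
    "Catcher C": 41,
}

fsr_mean = statistics.mean(fan_values.values())
fsr_sd = statistics.stdev(fan_values.values())

for name, value in sorted(fan_values.items(), key=lambda kv: -kv[1]):
    z = (value - fsr_mean) / fsr_sd   # standard deviations from the FSR mean
    est_uzr = UZR_MEAN + z * UZR_SD   # same z-score on the UZR scale
    print(f"{name}: {est_uzr:+.1f} estimated UZR")
```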

If you think the actual spread of catching scores is narrower than this, then simply divide the UZR scores in half.  We can then express a player's defensive contributions as a range, like "Navarro was most likely a -3 to -6 fielder behind the plate last season" or "Shoppach was most likely a -4 to -8 fielder last year."
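To put a quick number on that: a catcher sitting half a standard deviation below the fan-scouting mean comes out around -4 UZR under the full spread (0.34 - 0.5 × 8.4 ≈ -3.9), or roughly -2 under the halved spread, which is how you end up quoting him as something like a -2 to -4 defender.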

Obviously this method has its weak points, but like I said, it was merely a fun exercise on my part and it should be used, at most, as a rough estimate.  Don't use this to conclusively say that Navarro and Shoppach are horrible defensive catchers and should be run out of town; this is only one year of data, it relies on fan scouting reports, and it's very inexact.  Take this information for what it is: fun data that doesn't necessarily mean much.