As part of our continuing Fourth of July coverage, I'd like to fire a shot at The Process Report (writing from a phone, so a manual link: www.theprocessreport.wordpress.com). (Please note: I love The Process Report. I read everything they write, follow them on Twitter, and often learn things I didn't know. If you don't already, you should too.) It's unclear what their relationship is to DRaysBay. Are they the scrappy colonists who have broken away, or are they Mother Britain? Either way, it's all the same revolution.
One of the recurring features at TPR is a running tally of bullpen usage this year versus last year; the most recent installment was written by Jason Colette. The thought is that quality starters pitch deep into games, saving the bullpen, and that over the course of a season this adds up: the bullpen stays fresh and can pitch better. On a whim, I decided to see if I could show this effect. Here's how it went.
First I plotted bullpen usage in 2012 against SIERA, a good ERA estimator. I use an ERA estimator in an attempt to cut down on the small sample size noise present in relief stats.
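For anyone who wants to poke at this themselves, here's a minimal sketch of that first regression. The file name and column names (relief_ip, bullpen_siera) are placeholders of my own, not anything from FanGraphs or Steamer; swap in whatever your team-level export actually uses.

```python
# Rough sketch: regress 2012 bullpen SIERA on total relief innings pitched.
# "bullpens_2012.csv" and its column names are assumptions for illustration.
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import linregress

pens = pd.read_csv("bullpens_2012.csv")  # one row per team

# Fit a simple linear trend of bullpen SIERA against relief innings.
fit = linregress(pens["relief_ip"], pens["bullpen_siera"])

plt.scatter(pens["relief_ip"], pens["bullpen_siera"])
plt.plot(pens["relief_ip"], fit.intercept + fit.slope * pens["relief_ip"])
plt.xlabel("Relief innings pitched, 2012")
plt.ylabel("Bullpen SIERA, 2012")
plt.title(f"slope = {fit.slope:.4f}, r^2 = {fit.rvalue**2:.3f}")
plt.show()
```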
Not very impressive. There's a small upward slope, but an r-squared of .07 means usage explains only about 7% of the variation in bullpen SIERA, so we shouldn't go looking for meaningful causality here.
That's okay. I wouldn't expect a very strong relationship. The quality difference between pitchers is larger than the difference between them based purely on rest.
So I took another tack. Maybe we can see the effect of overwork in how a bullpen exceeds or falls short of its projections.
I used Steamer projections to build a weighted expectation for each team's bullpen in 2012, weighting each reliever's projection by his innings pitched. I then compared that to the bullpen's actual production, and called the result dSIERA. A dSIERA of .1 means a bullpen did 10% better than Steamer expected it to.
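Since the exact formula isn't spelled out above, here's a sketch of the dSIERA calculation as I understand it. The column names and the formula itself are my reconstruction: because a dSIERA of .1 is described as beating the projection by 10%, I treat dSIERA as the percentage by which actual SIERA came in below the innings-weighted Steamer projection.

```python
# Sketch of the team-level dSIERA calculation. The file "relievers_2012.csv"
# and its columns (team, ip, siera, steamer_siera) are assumed placeholders.
import pandas as pd
import numpy as np

relievers = pd.read_csv("relievers_2012.csv")


def ip_weighted(group: pd.DataFrame, col: str) -> float:
    """Innings-weighted average of a column for one team's relievers."""
    return np.average(group[col], weights=group["ip"])


rows = []
for team, group in relievers.groupby("team"):
    projected = ip_weighted(group, "steamer_siera")
    actual = ip_weighted(group, "siera")
    rows.append({
        "team": team,
        "projected_siera": projected,
        "actual_siera": actual,
        # Positive dSIERA = bullpen outperformed its projection (lower SIERA).
        "dsiera": (projected - actual) / projected,
    })

dsiera = pd.DataFrame(rows).sort_values("dsiera", ascending=False)
print(dsiera.to_string(index=False))
```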
First off, the selection bias is obvious. Bullpens on the whole do better than projected, because those pitchers who don't are sent down.
Once more, there's a very slight slope in support of the idea that more work hurts relievers, but the r-squared of .008 is WAY too small to call this something that matters.
So what's the deal? Is there another way of looking at bullpen usage and performance that might better establish that it matters? If it doesn't matter, should starters be pulled earlier? Should bullpens be given more work? Is there an extreme of usage where we'd start to see a real difference, and how do we find it?
Anyway, I hope you enjoyed my little Lexington Green. Happy Fourth, TPR.